Objective research to aid investing decisions
CXO Advisory

Investing Expertise

Can analysts, experts and gurus really give you an investing/trading edge? Should you track the advice of as many as possible? Are there ways to tell good ones from bad ones? Recent research indicates that the average “expert” has little to offer individual investors/traders. Finding exceptional advisers is no easier than identifying outperforming stocks. Indiscriminately seeking the output of as many experts as possible is a waste of time. Learning what makes a good expert accurate is worthwhile.

Financial Analysts 25% Optimistic?

How accurate are consensus firm earnings forecasts worldwide at a 12-month horizon? In his May 2016 paper entitled “An Empirical Study of Financial Analysts Earnings Forecast Accuracy”, Andrew Stotz measures the accuracy of consensus 12-month earnings forecasts by financial analysts for the companies they cover around the world. He defines the consensus as the average forecast of analysts covering a specific stock. He prepares data by starting with all stocks listed in all equity markets and sequentially discarding:

  1. Stocks with market capitalizations less than $50 million (U.S. dollars) as of December 2014 or the last day traded before delisting during the sample period.
  2. Stocks with no analyst coverage.
  3. Stocks without at least one target price and recommendation.
  4. The 2.1% of stocks with extremely small earnings, which may result in extremely large percentage errors.
  5. All observations of errors outside ±500% as outliers.
  6. Stocks without at least three analysts, one target price and one recommendation.

He focuses on scaled forecast error (SFE), defined as 12-month consensus forecasted earnings minus actual earnings, divided by the absolute value of actual earnings, as the key accuracy metric. Using monthly analyst earnings forecasts and subsequent actual earnings for all listed firms around the world during January 2003 through December 2014, he finds that: Keep Reading
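As a concrete illustration, the SFE metric amounts to the following (function name and sample numbers are hypothetical, not from the paper):

```python
def scaled_forecast_error(forecast: float, actual: float) -> float:
    """SFE: (consensus forecasted earnings - actual earnings) / |actual earnings|."""
    return (forecast - actual) / abs(actual)

# A consensus forecast of $1.25 per share against actual earnings of $1.00
# yields an SFE of +0.25 (analysts were 25% too optimistic).
print(scaled_forecast_error(1.25, 1.00))  # 0.25
```

The division by the absolute value of actual earnings is why the study discards stocks with extremely small earnings (step 4 above) and errors outside ±500% (step 5): a near-zero denominator makes SFE explode.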

Guru Re-grades

What happens to the rankings of Guru Grades after weighting each forecast by forecast horizon and specificity? In their March 2017 paper entitled “Evaluation and Ranking of Market Forecasters”, David Bailey, Jonathan Borwein, Amir Salehipour and Marcos Lopez de Prado re-evaluate and re-rank market forecasters covered in Guru Grades after weighting each forecast by these two parameters. They employ original Guru Grades forecast data as the sample of forecasts, including assessments of the accuracy of each forecast. However, rather than weighting each forecast equally, they:

  • Apply to each forecast a weight of 0.25, 0.50, 0.75 or 1.00 according to whether the forecast horizon is less than a month/indeterminate, 1-3 months, 3-9 months or greater than 9 months, respectively.
  • Apply to each forecast a weight of either 0.5 for less specificity or 1.0 for more specificity.

Using a sample of 6,627 U.S. stock market forecasts by 68 forecasters from CXO Advisory Group LLC, they find that: Keep Reading
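The two-part weighting scheme can be sketched as follows (an illustrative rendering of the weights described above; handling of horizons falling exactly on the 3-month and 9-month boundaries is my assumption):

```python
def horizon_weight(horizon_months):
    """Weight a forecast by its horizon; indeterminate horizons get the lowest weight."""
    if horizon_months is None or horizon_months < 1:
        return 0.25
    if horizon_months <= 3:
        return 0.50
    if horizon_months <= 9:
        return 0.75
    return 1.00

def forecast_weight(horizon_months, is_specific):
    """Combined weight: horizon weight times specificity weight (0.5 or 1.0)."""
    return horizon_weight(horizon_months) * (1.0 if is_specific else 0.5)

# A specific 6-month forecast counts six times as much as a vague 2-week call.
print(forecast_weight(6, True), forecast_weight(0.5, False))  # 0.75 0.125
```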

How Large University Endowments Allocate Investments

How are the asset allocations of the largest university endowments, conventionally accepted as among the best investors, evolving? In their December 2016 paper entitled “The Evolution of Asset Classes: Lessons from University Endowments”, John Mulvey and Margaret Holen summarize recent public reports from large U.S. university endowments, focusing on asset category definitions and allocations. Using public disclosures of 50 large university endowments for 2015, they find that: Keep Reading

The Value of Fund Manager Discretion?

Are there material average performance differences between hedge funds that emphasize systematic rules/algorithms for portfolio construction versus those that do not? In their December 2016 paper entitled “Man vs. Machine: Comparing Discretionary and Systematic Hedge Fund Performance”, Campbell Harvey, Sandy Rattray, Andrew Sinclair and Otto Van Hemert compare average performances of systematic and discretionary hedge funds for the two largest fund styles covered by Hedge Fund Research: Equity Hedge (6,955 funds) and Macro (2,182 funds). They designate a fund as systematic if its description contains “algorithm”, “approx”, “computer”, “model”, “statistical” and/or “system”. They designate a fund as discretionary if its description contains none of these terms. They focus on net fund alphas, meaning after-fee returns in excess of the risk-free rate, adjusted for exposures to three kinds of risk factors well known at the start of the sample period: (1) traditional equity market, bond market and credit factors; (2) dynamic stock size, stock value, stock momentum and currency carry factors; and, (3) a volatility factor specified as monthly returns from buying one-month, at-the-money S&P 500 Index calls and puts and holding to expiration. Using monthly after-fee returns for the specified hedge funds (excluding backfilled returns but including dead fund returns) during June 1996 through December 2014, they find that: Keep Reading
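The keyword screen that designates funds as systematic or discretionary could be sketched as (sample fund descriptions are invented):

```python
SYSTEMATIC_TERMS = ("algorithm", "approx", "computer", "model", "statistical", "system")

def classify_fund(description: str) -> str:
    """Systematic if the description contains any keyword, else discretionary."""
    text = description.lower()
    return "systematic" if any(term in text for term in SYSTEMATIC_TERMS) else "discretionary"

print(classify_fund("Trend-following models across global equity markets"))  # systematic
print(classify_fund("Fundamental stock picking by a veteran manager"))       # discretionary
```

Note that substring matching makes the screen broad: "systematically" or "modeling" in a description would also trigger the systematic label.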

Robo Advisor Expected Performance and Acceptance

Does a flexible robo advisor (offering automated, passive investment strategies tailored to investor situation/preferences) perform well compared to the mutual fund/stock portfolios it might replace? If so, what inhibits investors from switching? In their November 2016 paper entitled “Robo Advisers and Mutual Fund Stickiness”, Michael Reher and Celine Sun compare actual mutual fund/stock portfolios held by individuals to Wealthfront robo advisor portfolios constructed by assigning weights to 10 exchange-traded funds based on investor responses to questions about financial situation and risk tolerance. The robo advisor portfolio construction process includes a critique of original portfolio diversification, fees and cash holdings. They focus on stock, mutual fund and ETF holdings in retirement (non-taxable) portfolios. They project net portfolio performance at the asset level based principally on Capital Asset Pricing Model (CAPM) estimates (alpha plus market beta) of asset returns. They group findings by: individuals who manage their own portfolios versus those who rely on mutual funds; and, individuals who choose to set up robo advisor accounts versus those who do not. Using original investor portfolio and corresponding robo advisor portfolio holdings collected during mid-January 2016 through early November 2016, fund loads and fees as of September 2016, and monthly returns for all assets and factors as available since January 1975, they find that: Keep Reading

Self-grading of the Morningstar Fund Rating System

How well does the Morningstar fund rating system (one star to five stars) work? In their November 2016 paper entitled “The Morningstar Rating for Funds: Analyzing the Performance of the Star Rating Globally”, suggested for review by a subscriber, Jeffrey Ptak, Lee Davidson, Christopher Douglas and Alex Zhao analyze the global performance of star ratings in terms of ability to predict fund performance. They use two test methodologies:

  1. Monthly two-stage regressions that test the ability of fund star ratings to add value to a linear factor model for each asset class at a one-month horizon. The first stage estimates fund dependencies (betas) on commonly used predictive factors over the past 36 months. The second stage measures the ability of fund star ratings to add predictive power to those betas in the following month. For stock funds, they consider fee, equity market, size, value and momentum factors. For bond funds, they consider fee, credit and term factors. For stock-bond funds, they consider all these factors. For alternative asset class funds, they consider fee and equity market factors.
  2. An event study that tracks performances of equally weighted portfolios of funds formed by prior-month star rating over the next 1, 3, 6, 12, 36 and 60 months.

Using fund categories, monthly fund star ratings and returns, and asset class factor returns during January 2003 through December 2015, they find that: Keep Reading
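A toy sketch of the two-stage test on simulated data (all numbers are randomly generated, and the stripped-down specification, e.g. omitting the fee factor, is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n_funds, n_months = 50, 36
factors = rng.normal(0.0, 0.02, (n_months, 3))        # e.g. market, size, value
true_betas = rng.normal(1.0, 0.3, (n_funds, 3))
returns = factors @ true_betas.T + rng.normal(0, 0.01, (n_months, n_funds))
stars = rng.integers(1, 6, n_funds)                   # 1-5 star ratings

# Stage 1: estimate each fund's factor betas over the past 36 months.
X = np.column_stack([np.ones(n_months), factors])
betas_hat = np.linalg.lstsq(X, returns, rcond=None)[0][1:].T   # shape (n_funds, 3)

# Stage 2: regress next-month fund returns cross-sectionally on estimated betas
# plus star rating; the star-rating coefficient measures predictive power
# beyond the factor model.
next_month = rng.normal(0, 0.02, n_funds)
Z = np.column_stack([np.ones(n_funds), betas_hat, stars])
coefs, *_ = np.linalg.lstsq(Z, next_month, rcond=None)
star_coef = coefs[-1]
```

In the actual study, the statistical significance of `star_coef` (estimated monthly and aggregated over time) is the test of whether star ratings add value.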

Institutional Stock Trading Expertise

Does trading by expert investors boost performance (profitably exploiting information) or depress performance (unprofitably exploiting information or wastefully churning on noise)? In their September 2016 paper entitled “Trading Frequency and Fund Performance”, Jeffrey Busse, Lin Tong, Qing Tong and Zhe Zhang investigate the relationship between trading frequency and performance among institutional investors (funds). They specify fund daily trading frequency as the number of trades divided by the number of unique stocks traded. They calculate fund quarterly trading frequency as average daily trading frequency during the quarter. For each buy or sell, they calculate the return from execution date (at execution price) to the end of the quarter, adjusted for stock splits, dividends and, where applicable, commissions. They estimate quarterly fund trading performance by aggregating performances of buys and sells separately, weighted either equally or by trade size, such that the average holding interval is about half a quarter. They subtract fund benchmark return over the same holding interval to calculate abnormal return. They then examine the relationship between abnormal return and fund size. Using daily common stock transaction details for 843 fund managers and 5,277 unique funds, along with associated stock return and firm data, during January 1999 through December 2009, they find that: Keep Reading
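The frequency measure amounts to the following (a minimal sketch; function names are mine):

```python
def daily_trading_frequency(num_trades: int, unique_stocks_traded: int) -> float:
    """Daily trading frequency: trades per unique stock traded that day."""
    return num_trades / unique_stocks_traded

def quarterly_trading_frequency(daily_frequencies: list[float]) -> float:
    """Quarterly frequency: average of the daily frequencies within the quarter."""
    return sum(daily_frequencies) / len(daily_frequencies)

# A fund making 12 trades in 4 distinct stocks has a daily frequency of 3.0;
# averaging daily frequencies of 3.0, 1.0 and 2.0 gives a quarterly frequency of 2.0.
print(daily_trading_frequency(12, 4))                  # 3.0
print(quarterly_trading_frequency([3.0, 1.0, 2.0]))    # 2.0
```

The ratio form means a fund that repeatedly trades in and out of the same few stocks registers as high-frequency, while a fund making many one-off trades across many stocks does not.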

Trendy Mutual Fund Performance

Should mutual fund investors go with trendy new funds? In their August 2016 paper entitled “What’s Trending? The Performance and Motivations for Mutual Fund Startups”, Jason Greene and Jeffrey Stark examine the interactions of mutual fund trendiness with growth in assets under management, fees and performance. They quantify fund trendiness by each month:

  1. Relating each key word found in fund names to industry fund flows over the past 12 months.
  2. Subtracting the average key word-flow relationship for the entire sample period from the monthly relationship to indicate current key word trendiness.
  3. Ranking key words by trendiness.
  4. Averaging the trendiness ranks for each key word in each fund name to measure fund trendiness.

They then relate fund trendiness to fund flows over the next 12 months, fund fee level at fund inception and fund performance over its first five years of existence. Using fund names and monthly fund returns, fund assets and factor returns for alpha calculations during 1993 through 2014 (7,072 distinct funds), they find that: Keep Reading
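The four-step trendiness measure might be sketched like this (all key words and flow-relationship numbers are invented for illustration):

```python
# Step 1 (hypothetical output): relationship of each name key word to
# industry fund flows over the past 12 months.
keyword_flow = {"growth": 0.8, "value": 0.2, "internet": 1.5, "income": 0.1}
# Full-sample average relationship for each key word (illustrative values).
keyword_avg = {"growth": 0.5, "value": 0.4, "internet": 0.6, "income": 0.3}

# Steps 2-3: subtract the full-sample average, then rank key words by
# current trendiness (rank 1 = trendiest).
trendiness = {k: keyword_flow[k] - keyword_avg[k] for k in keyword_flow}
ranked = sorted(trendiness, key=trendiness.get, reverse=True)
rank = {k: i + 1 for i, k in enumerate(ranked)}

# Step 4: fund trendiness = average rank of key words appearing in its name.
def fund_trendiness(fund_name: str):
    ranks = [rank[k] for k in rank if k in fund_name.lower()]
    return sum(ranks) / len(ranks) if ranks else None

print(fund_trendiness("Internet Growth Fund"))  # 1.5
```

With these invented numbers, "internet" ranks 1 and "growth" ranks 2, so an "Internet Growth Fund" scores 1.5 — very trendy.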

Factor Timing among Hedge Fund Managers

Can hedge fund managers reliably time eight factors explaining multi-class asset returns: equity market; size; bond market; credit spread; trend-following for bonds, currencies and commodities; and, emerging markets? In their July 2016 paper entitled “Timing is Money: The Factor Timing Ability of Hedge Fund Managers”, Bart Osinga, Marc Schauten and Remco Zwinkels study the magnitude, determinants and persistence of factor timing ability among hedge fund managers. To minimize biases, they: include live and dead funds; remove the first 18 months of returns for each fund; consider only funds that have at least 36 monthly returns and average assets under management of at least $10 million; and, consider only funds that report net monthly excess returns in U.S. dollars. They also exclude the top and bottom 1% of all returns to suppress outlier effects. Using monthly returns for 2,132 dead and 992 live hedge funds encompassing nine investment styles, and contemporaneous factor returns, during January 1994 through April 2014, they find that: Keep Reading

Evaluating 5,017 Technical Trading Recommendations

Do equity trade recommendations from technical analysis experts beat the market? In his February 2016 paper entitled “Are Chartists Artists? The Determinants and Profitability of Recommendations Based on Technical Analysis”, Dirk Gerritsen evaluates technically based buy and sell recommendations for individual Dutch stocks and the AEX index. Specifically, he measures abnormal performance from 10 trading days before (including the publication date) through 20 trading days after recommendations. For individual stocks, “abnormal” means in excess of the return estimated by the four-factor (market, size, book-to-market, momentum) model. For the AEX index, abnormal means in excess of average index return over the year preceding the 30-day measurement interval. For recommendations that include stop-loss instructions, he also measures abnormal asset performance after any stop-loss actions. Finally, he examines whether recommendations agree with the consensus of eight kinds of simple technical trading rules. Using daily stock and AEX index prices, total returns and trading volumes associated with 5,017 recommendations (3,967 with 500 stop-losses for individual stocks and 1,050 with 242 stop-losses for the index) from 101 experts on the Dutch stock market during 2004 through 2010, he finds that: Keep Reading
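For the individual-stock case, the abnormal-return calculation amounts to the following (a generic four-factor sketch; all coefficients and factor returns are invented):

```python
def abnormal_return(realized_excess: float, factor_returns: dict, betas: dict,
                    alpha: float = 0.0) -> float:
    """Realized excess return minus the four-factor model's expected return."""
    expected = alpha + sum(betas[f] * factor_returns[f] for f in betas)
    return realized_excess - expected

factor_returns = {"market": 0.02, "size": 0.01, "value": -0.005, "momentum": 0.015}
betas = {"market": 1.1, "size": 0.3, "value": 0.2, "momentum": -0.1}
# Expected = 0.022 + 0.003 - 0.001 - 0.0015 = 0.0225; abnormal = 0.04 - 0.0225
print(round(abnormal_return(0.04, factor_returns, betas), 4))  # 0.0175
```

The betas would come from estimating the factor model over a window around each recommendation; the 30-day measurement interval then determines the realized return being compared.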
