Objective research to aid investing decisions

Investing Expertise

Can analysts, experts and gurus really give you an investing/trading edge? Should you track the advice of as many as possible? Are there ways to tell good ones from bad ones? Recent research indicates that the average “expert” has little to offer individual investors/traders. Finding exceptional advisers is no easier than identifying outperforming stocks. Indiscriminately seeking the output of as many experts as possible is a waste of time. Learning what makes a good expert accurate is worthwhile.

Neural Network Software Valuation of Fine Art

Given the uniqueness of fine art objects and uncertainties in demand (at auctions), can investors in paintings get accurate estimates of market values of holdings and potential acquisitions? In their March 2019 paper entitled “Machines and Masterpieces: Predicting Prices in the Art Auction Market”, Mathieu Aubry, Roman Kräussl, Gustavo Manso and Christophe Spaenjers compare accuracies of value estimates for paintings based on: (1) a linear hedonic regression (factor model), (2) neural network software and (3) auction houses. For the first two, they employ 985,188 auctions of paintings during 2008–2014 for in-sample training and 104,404 auctions of paintings during the first half of 2015 for out-of-sample testing. Neural network software inputs include information about artists and paintings (year of creation, materials, size, title and markings), and images of the paintings. Using information about artists/paintings and images and auction house estimates and sales prices for the specified 1,089,592 paintings by about 125,000 artists offered through 372 auction houses during January 2008 through June 2015, they find that:
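For illustration, a linear hedonic regression models (log) auction price as a linear function of observable characteristics. The sketch below uses a few invented painting features and synthetic prices, not the authors' data or specification:

```python
import numpy as np

# Hypothetical hedonic regression sketch: log auction price as a linear
# function of painting characteristics. Features and coefficients are
# invented for illustration.
rng = np.random.default_rng(0)
n = 500

area = rng.uniform(0.1, 4.0, n)          # surface area in square meters
year = rng.integers(1850, 2000, n)       # year of creation
oil = rng.integers(0, 2, n)              # oil-on-canvas dummy

# Synthetic "true" hedonic relationship plus noise
log_price = (8.0 + 0.6 * np.log(area) + 0.002 * (year - 1900)
             + 0.4 * oil + rng.normal(0, 0.5, n))

# Fit by ordinary least squares; fitted values are hedonic price estimates
X = np.column_stack([np.ones(n), np.log(area), year - 1900, oil])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
estimates = np.exp(X @ beta)
print(np.round(beta, 2))
```

In the paper's comparison, such hedonic estimates and neural network predictions are each scored against realized auction prices out of sample.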

Keep Reading

Cautions Regarding Findings Include…

What are common cautions regarding exploitation of academic and practitioner papers on financial markets? To investigate, we collect, collate and summarize our cautions on findings from papers reviewed over the past year. These papers are survivors of screening for relevance to investors of a much larger number of papers, mostly from the Financial Economics Network (FEN) Subject Matter eJournals and Journal of Economic Literature (JEL) Code G1 sections of the Social Sciences Research Network (SSRN). Based on review of cautions in 109 summaries of papers relevant to investors posted during mid-March 2018 through mid-March 2019, we conclude that: Keep Reading

Equity Factor Census

Should investors trust academic equity factor research? In their February 2019 paper entitled “A Census of the Factor Zoo”, Campbell Harvey and Yan Liu announce a comprehensive database of hundreds of equity factors from top academic journals and working papers through January 2019, including a link to citation and download information. They distinguish among six types of common factors and five types of firm characteristic-based factors. They also explore incentives for factor discovery and reasons why many factors are lucky findings that exaggerate expectations and disappoint in live trading. Finally, they announce a project that allows researchers to add published and working papers to the database. Based on their census of published factors and analysis of implications, they conclude that: Keep Reading

Mutual Fund Investors Irrationally Naive?

Do retail investors rationally account for risks as modeled in academic research when choosing actively managed equity mutual funds? In their March 2019 paper entitled “What Do Mutual Fund Investors Really Care About?”, Itzhak Ben-David, Jiacui Li, Andrea Rossi and Yang Song investigate whether simple, well-known signals explain active mutual fund investor behavior better than academic asset pricing models. Specifically, they compare abilities of Morningstar’s star ratings and recent returns versus formal pricing models to predict net fund flows. They consider the Capital Asset Pricing Model (CAPM) and alphas calculated with 1-factor (or market-adjusted), 3-factor (plus size and book-to-market) and 4-factor (plus momentum) models of stock returns. They consider degree of agreement between signals for a fund (such as number of Morningstar stars and sign of a factor model alpha) and the sign of net capital flow for that fund. They also analyze spreads between net flows to top and bottom funds ranked according to Morningstar stars and fund alphas, using the numbers of 5-star and 1-star funds to set the numbers of top-ranked and bottom-ranked funds, respectively. Using monthly returns and Morningstar ratings for 3,432 actively managed U.S. equity mutual funds and contemporaneous market, size, book-to-market and momentum factor returns during January 1991 through December 2011 (to match prior research), they find that:
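For readers unfamiliar with the mechanics, a fund's 4-factor alpha comes from a time-series regression of fund excess returns on the four factor returns; the intercept is the alpha. A minimal sketch with synthetic data (all loadings and returns below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 240  # 20 years of synthetic monthly observations

# Hypothetical monthly factor excess returns: market, size, book-to-market,
# momentum
factors = rng.normal(0.005, 0.03, (T, 4))

# Synthetic fund excess returns: factor loadings plus a small true alpha
loadings = np.array([1.0, 0.2, -0.1, 0.05])
true_alpha = 0.001  # 10 basis points per month
fund = true_alpha + factors @ loadings + rng.normal(0, 0.01, T)

# 4-factor regression: the intercept is the fund's estimated alpha
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, fund, rcond=None)
alpha_hat, betas = coef[0], coef[1:]
print(round(alpha_hat, 4), np.round(betas, 2))
```

The paper asks whether net flows line up better with the sign of such alphas or with coarser signals like star ratings and raw past returns.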

Keep Reading

Alternative Beta Live

Have long-short alternative beta (style premium) strategies worked well in practice? In their February 2019 paper entitled “A Decade of Alternative Beta”, Antti Suhonen and Matthias Lennkh use actual performance data to assess alternative beta strategies across asset classes from the end of 2007 through the end of 2017, including quantification of fees and potential survivorship bias in public data. Specifically, they form three equal volatility weighted (risk parity) composite portfolios of strategies at the ends of each year during 2007-2016, 2007-2011 and 2012-2016. Each portfolio includes all the strategies launched during the first year and then adds strategies launched each following year at the end of that year. When a strategy dies (is discontinued by the offeror), they reallocate its weight to surviving strategies within the portfolio. They also create two additional portfolios for each period/subperiod that segregate equities and non-equities. They further evaluate alternative beta strategy diversification benefits by comparing them to conventional asset class portfolios. Using weekly post-launch excess returns in U.S. dollars for 349 reasonably unique live and dead alternative beta strategies offered by 17 global investment banks, spanning 14 styles and having at least one year of history during 2008 through 2017, they find that:
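The equal volatility (inverse volatility) weighting the authors use can be sketched simply: each strategy's weight is proportional to the reciprocal of its realized volatility. The series and volatilities below are synthetic, and the sketch shows a single rebalance rather than the paper's annual additions and dead-strategy reallocations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic weekly excess returns for five hypothetical strategies
# with differing volatilities
vols = np.array([0.01, 0.02, 0.005, 0.03, 0.015])
returns = rng.normal(0.0005, vols, (260, 5))  # roughly five years of weeks

# Inverse volatility (equal volatility) weights at one rebalance
realized_vol = returns.std(axis=0, ddof=1)
inv = 1.0 / realized_vol
weights = inv / inv.sum()

# Composite portfolio return series under those weights
composite = returns @ weights
print(np.round(weights, 3))
```

Low-volatility strategies receive the largest weights, so each strategy contributes comparably to portfolio risk; when a strategy dies, its weight would be redistributed across survivors in the same proportional fashion.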

Keep Reading

Machine Learning Factor?

What are potential monthly returns and alphas from applying machine learning to pick stocks? In their February 2019 paper entitled “Machine Learning for Stock Selection”, Keywan Rasekhschaffe and Robert Jones summarize basic concepts of machine learning and apply them to select stocks from U.S. and non-U.S. samples, focusing on the cross-section of returns (as in equity factor studies). To alleviate overfitting in an environment with low signal-to-noise ratios, they highlight use of: (1) data feature engineering, and (2) combining outputs from different machine learning algorithms and training sets. Feature engineering applies market/machine learning knowledge to select the forecast variable, algorithms likely to be effective, training sets likely to be informative, factors likely to be informative and factor standardization approach. Their example employs an initial 10-year training period and then walks forecasts forward monthly (as in most equity factor research) for each stock, as follows:

  • Employ 194 firm/stock input variables.
  • Use three rolling training sets (last 12 months, same calendar month last 10 years and bottom half of performance last 10 years), separately for U.S. and non-U.S. samples.
  • Apply four machine learning algorithms, generating 12 signals (three training sets times four algorithms) for each stock each month, plus a composite signal based on percentile rankings of the 12 signals.
  • Rank stocks into tenths (deciles) based on each signal, which forecasts probability of next-month outperformance/underperformance.
  • Form two hedge portfolios that are long the decile of stocks with the highest expected performance and short the decile with the lowest, one equal-weighted and one risk-weighted (inverse volatility over the past 100 trading days), with a 2-day lag between forecast and portfolio reformation to accommodate execution.
  • Calculate gross and net average excess (relative to U.S. Treasury bill yield) returns and 4-factor (market, size, book-to-market, momentum) alphas for the portfolios. To estimate net performance, they assume 0.3% round trip trading frictions. 
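The train-score-rank-hedge loop above can be sketched for a single month with a toy cross-section. The sketch substitutes one cross-sectional least squares fit for the paper's four machine learning algorithms and three training sets, and all dimensions, coefficients and returns are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_stocks, n_features = 200, 5  # far fewer than the paper's 194 variables

# Synthetic training cross-section: features this month, returns next month,
# with a weak linear link (low signal-to-noise, as the authors stress)
true_coef = rng.normal(0, 0.002, n_features)
train_X = rng.normal(0, 1, (n_stocks, n_features))
train_y = train_X @ true_coef + rng.normal(0, 0.05, n_stocks)

# "Training": cross-sectional least squares stands in for the paper's
# machine learning algorithms
coef, *_ = np.linalg.lstsq(train_X, train_y, rcond=None)

# Out-of-sample month: score each stock, rank into deciles, then go long
# the top decile and short the bottom decile
test_X = rng.normal(0, 1, (n_stocks, n_features))
scores = test_X @ coef
deciles = np.argsort(np.argsort(scores)) * 10 // n_stocks  # values 0..9
long_leg = deciles == 9
short_leg = deciles == 0
print(long_leg.sum(), short_leg.sum())
```

A full walk-forward test would repeat this each month on rolling training sets, apply the 2-day execution lag, weight the legs (equally or by inverse volatility) and net out assumed trading frictions.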

They consider two benchmark portfolios that pick long and short side using non-machine learning methods. Using a broad sample of small, medium and large stocks (average 5,907 per month) spanning 22 developed markets, and contemporaneous values for the 194 input variables, during January 1994 through December 2016, they find that: Keep Reading

Sloppy Selling of Expert Traders?

Do expert investors (institutional stock portfolio managers) add value both by buying future outperforming stocks and by selling future underperforming stocks? In their December 2018 paper entitled “Selling Fast and Buying Slow: Heuristics and Trading Performance of Institutional Investors”, Klakow Akepanidtaworn, Rick Di Mascio, Alex Imas and Lawrence Schmidt examine trade decisions of experienced institutional (e.g., pension fund) stock portfolio managers to determine whether they buy and sell shrewdly. In their main tests, they evaluate: (1) positions added versus randomly buying more shares of some stock already in the portfolio; and, (2) positions liquidated versus randomly selling some other holding that was not traded on that date. Using data for 783 portfolios involving 4.4 million trades (2.0 million sells and 2.4 million buys), and prices for assets held and traded in U.S. dollars, during January 2000 through March 2016, they find that:
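The counterfactual benchmark in test (2) amounts to comparing each actual sale with a random alternative sale from the same portfolio on the same date. A toy sketch with invented tickers and returns (not the authors' data or scoring rules):

```python
import random

random.seed(7)

# Hypothetical portfolio: each holding's subsequent-period return
holdings = {f"STK{i}": random.gauss(0.01, 0.08) for i in range(20)}

# Suppose the manager actually sold these five positions
sold = random.sample(list(holdings), 5)

# Average subsequent return of actual sells vs. randomly chosen
# counterfactual sells from the remaining holdings
actual = sum(holdings[s] for s in sold) / len(sold)
counterfactual = sum(
    holdings[random.choice([h for h in holdings if h != s])] for s in sold
) / len(sold)
print(round(actual, 4), round(counterfactual, 4))
```

A skilled seller should, on average, offload stocks that go on to underperform the random counterfactual; repeating the comparison across many trades and dates estimates the value added (or destroyed) by selling decisions.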

Keep Reading

Stopping Tests after Lucky Streaks?

Might purveyors of trading strategies be presenting performance results biased by stopping them when falsely successful? In other words, might they be choosing lucky closing conditions for reported positions? In the December 2018 revision of their paper entitled “p-Hacking and False Discovery in A/B Testing”, Ron Berman, Leonid Pekelis, Aisling Scott and Christophe Van den Bulte investigate whether online A/B experimenters bias results by stopping monitored commercial (marketing) experiments based on latest p-value. They hypothesize that such a practice may exist due to: (1) poor training in statistics; (2) self-deception motivated by desire for success; or, (3) deliberate deception for selling purposes. They employ regression discontinuity analysis to estimate whether reaching a particular p-value causes experimenters to end their tests. Using data from 2,101 online A/B experiments with daily tracking of results during 2014, they find that:
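The pitfall the authors study is easy to reproduce: monitoring a test and stopping as soon as p falls below 0.05 inflates the false positive rate far above the nominal 5%. A small simulation sketch with a no-effect A/B test and invented parameters (normal-approximation two-proportion z-test):

```python
import math
import random

random.seed(4)

def z_pvalue(sa, na, sb, nb):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pool = (sa + sb) / (na + nb)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / na + 1 / nb))
    if se == 0:
        return 1.0
    z = (sa / na - sb / nb) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_ab(peek, days=20, visitors_per_day=200, p=0.1):
    """A/B test with NO true effect; optionally stop early when p < 0.05."""
    sa = sb = na = nb = 0
    for _ in range(days):
        sa += sum(random.random() < p for _ in range(visitors_per_day))
        sb += sum(random.random() < p for _ in range(visitors_per_day))
        na += visitors_per_day
        nb += visitors_per_day
        if peek and z_pvalue(sa, na, sb, nb) < 0.05:
            return True  # stopped on a lucky streak
    return z_pvalue(sa, na, sb, nb) < 0.05

trials = 500
false_pos_peek = sum(run_ab(peek=True) for _ in range(trials)) / trials
false_pos_fixed = sum(run_ab(peek=False) for _ in range(trials)) / trials
print(false_pos_peek, false_pos_fixed)
```

Since both arms have identical conversion rates, every "significant" result is a false positive; peeking daily multiplies the chance of catching one, which is exactly the stopping behavior the regression discontinuity analysis looks for.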

Keep Reading

Should the “Anxious Index” Make Investors Anxious?

Since 1990, the Federal Reserve Bank of Philadelphia has conducted a quarterly Survey of Professional Forecasters. The American Statistical Association and the National Bureau of Economic Research conducted the survey from 1968 through 1989. Among other things, the survey solicits from economic experts probabilities of U.S. economic recession (negative GDP growth) during each of the next four quarters. The survey report release schedule is mid-quarter. For example, the release date of the fourth quarter 2018 report is November 13, 2018, with forecasts for the four quarters of 2019. The “Anxious Index” is the probability of recession during the next quarter. Are these forecasts meaningful for future U.S. stock market returns? Rather than relate the probability of recession to stock market returns, we instead relate one minus the probability of recession (the probability of good times). If forecasts are accurate, a relatively high (low) forecasted probability of good times should indicate a relatively strong (weak) stock market. Using survey results and quarterly S&P 500 Index levels (on survey release dates as available, and mid-quarter before availability of release dates) from the fourth quarter of 1968 through the fourth quarter of 2018 (201 surveys), we find that:
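The test amounts to measuring the relation between the forecasted probability of good times and the subsequent quarterly stock market return. A minimal sketch with synthetic survey probabilities and returns (not the actual survey or S&P 500 data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 201  # number of quarterly surveys in the sample period

# Hypothetical forecasted probabilities of recession next quarter
prob_recession = rng.uniform(0.05, 0.5, n)
prob_good_times = 1 - prob_recession

# Synthetic next-quarter index returns, weakly linked to the forecast
next_qtr_return = (0.2 * (prob_good_times - prob_good_times.mean())
                   + rng.normal(0.02, 0.07, n))

# Simple diagnostic: correlation between forecast and subsequent return
corr = np.corrcoef(prob_good_times, next_qtr_return)[0, 1]
print(round(corr, 2))
```

A reliably positive correlation (or a rising pattern in average returns across forecast-probability buckets) would indicate the forecasts carry information about future stock market behavior.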

Keep Reading

Online, Real-time Test of AI Stock Picking?

Will equity funds “managed” by artificial intelligence (AI) outperform human investors? To investigate, we consider the performance of AI Powered Equity ETF (AIEQ), which “seeks to provide investment results that exceed broad U.S. Equity benchmark indices at equivalent levels of volatility.” More specifically, offeror EquBot: “…leverages IBM’s Watson AI to conduct an objective, fundamental analysis of U.S.-listed common stocks and real estate investment trusts…based on up to ten years of historical data and apply that analysis to recent economic and news data. Each day, the EquBot Model ranks each company based on the probability of the company benefiting from current economic conditions, trends, and world events and identifies approximately 30 to 70 companies with the greatest potential over the next twelve months for appreciation and their corresponding weights, while maintaining volatility…comparable to the broader U.S. equity market. The Fund may invest in the securities of companies of any market capitalization. The EquBot model recommends a weight for each company based on its potential for appreciation and correlation to the other companies in the Fund’s portfolio. The EquBot model limits the weight of any individual company to 10%.” We use SPDR S&P 500 (SPY) as a simple benchmark for AIEQ performance. Using daily dividend-adjusted closes of AIEQ and SPY from AIEQ inception (October 18, 2017) through December 2018, we find that: Keep Reading
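Such a fund-versus-benchmark comparison boils down to computing cumulative return and annualized volatility from daily dividend-adjusted closes. A sketch with synthetic price series standing in for AIEQ and SPY (the lengths, drifts and volatilities below are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
n_days = 300  # roughly the inception-to-end-2018 span

# Hypothetical daily dividend-adjusted closes for a fund and its benchmark
fund = 25 * np.cumprod(1 + rng.normal(0.0002, 0.012, n_days))
bench = 250 * np.cumprod(1 + rng.normal(0.0003, 0.010, n_days))

def summarize(closes):
    """Cumulative return and annualized volatility from daily closes."""
    r = np.diff(closes) / closes[:-1]
    cum = closes[-1] / closes[0] - 1
    ann_vol = r.std(ddof=1) * np.sqrt(252)
    return cum, ann_vol

print(summarize(fund), summarize(bench))
```

Comparing the two summaries (and, with more care, correlation and drawdowns) indicates whether the fund delivers benchmark-beating returns at comparable volatility, as its objective claims.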
