Objective research to aid investing decisions


Investing Expertise

Can analysts, experts and gurus really give you an investing/trading edge? Should you track the advice of as many as possible? Are there ways to tell good ones from bad ones? Recent research indicates that the average “expert” has little to offer individual investors/traders. Finding exceptional advisers is no easier than identifying outperforming stocks. Indiscriminately seeking the output of as many experts as possible is a waste of time. Learning what makes a good expert accurate is worthwhile.

Blending AI Stock Picking and Conventional Portfolio Optimization

Should investors trust artificial intelligence (AI) models such as ChatGPT to pick stocks? In their August 2023 paper entitled “ChatGPT-based Investment Portfolio Selection”, Oleksandr Romanko, Akhilesh Narayan and Roy Kwon explore use of ChatGPT to recommend 15, 30 or 45 S&P 500 stocks, with portfolio weights, based on textual sentiment as available to ChatGPT via web content up to September 2021. For robustness, they ask ChatGPT to repeat recommendations for each portfolio 30 times and select the 15, 30 or 45 most frequently recommended stocks for respective portfolios. They then test out-of-sample performance of the following five implementations of each portfolio during September 2021 to July 2023, mid-March 2023 to July 2023, and May 2023 to July 2023:

  1. ChatGPT picks and ChatGPT weights.
  2. ChatGPT picks weighted equally.
  3. ChatGPT picks weighted based on minimum variance (Min Var) weights from a 5-year rolling weekly history.
  4. ChatGPT picks weighted based on maximum return (Max Ret) weights from a 5-year rolling weekly history.
  5. ChatGPT picks weighted based on maximum Sharpe ratio (Max Sharpe) weights from a 5-year rolling weekly history.

For benchmarking, they consider:

  • Long-only portfolios that incorporate all possible combinations of 15, 30 or 45 S&P 500 stocks weighted as above for Min Var, Max Ret or Max Sharpe.
  • The S&P 500 Index, Dow Jones Industrial Average and the NASDAQ Index.
  • Average performance of 13 popular equity funds.
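The Min Var weighting scheme above can be illustrated with a minimal sketch. This is not the paper's code: it computes the textbook unconstrained minimum-variance solution w = Σ⁻¹1 / (1ᵀΣ⁻¹1) from a return history, and the function name is my own.

```python
import numpy as np

def min_variance_weights(returns):
    """Unconstrained minimum-variance weights from a return history.

    returns: (n_periods, n_stocks) array, e.g. a 5-year rolling weekly history.
    Solves w = inv(Cov) @ 1, normalized to sum to 1.
    """
    cov = np.cov(returns, rowvar=False)   # sample covariance of stock returns
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)        # inv(Cov) @ 1 without explicit inversion
    return w / w.sum()
```

With two uncorrelated assets, the weights come out inversely proportional to the assets' variances, as expected for a minimum-variance portfolio.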

Using weekly data as specified up to September 2021 for training and subsequent weekly data through June 2023 for out-of-sample testing, they find that:

Keep Reading

Machine Stock Return Forecast Disagreement and Future Return

Is dispersion of stock return forecasts from different machine learning models trained on the same history (as a proxy for variation in human beliefs) a useful predictor of stock returns? In their August 2023 paper entitled “Machine Forecast Disagreement”, Turan Bali, Bryan Kelly, Mathis Moerke and Jamil Rahman relate dispersion in 100 monthly stock return predictions for each stock, generated by randomly varied versions of a machine learning model applied to 130 firm/stock characteristics, to future stock returns. They measure machine return forecast dispersion for each stock as the standard deviation of predicted returns. They then each month sort stocks into tenths (deciles) based on this dispersion, form either a value-weighted or an equal-weighted portfolio for each decile and compute average next-month portfolio return. Their key metric is average next-month return for a hedge portfolio that is each month long (short) the stocks in the lowest (highest) decile of machine return forecast dispersions. Using the 130 monthly firm/stock characteristics and associated monthly stock returns for a broad sample of U.S. common stocks (excluding financial and utilities firms and stocks trading below $5) during July 1966 through December 2022, they find that:
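The dispersion sort and hedge portfolio described above can be sketched as follows. This is a hypothetical, equal-weighted illustration of my reading of the methodology, not the authors' code:

```python
import numpy as np

def dispersion_hedge_return(predictions, next_returns, n_deciles=10):
    """Equal-weighted hedge return from a machine-forecast-dispersion sort.

    predictions: (n_stocks, n_models) array of return forecasts per stock
                 from randomly varied versions of one ML model.
    next_returns: (n_stocks,) realized next-month returns.
    Long the lowest-dispersion decile, short the highest-dispersion decile.
    """
    # Forecast dispersion per stock = standard deviation across model versions
    dispersion = predictions.std(axis=1)
    # Sort stocks by dispersion and split into deciles (first = lowest)
    order = np.argsort(dispersion)
    deciles = np.array_split(order, n_deciles)
    low = next_returns[deciles[0]].mean()    # lowest-dispersion decile
    high = next_returns[deciles[-1]].mean()  # highest-dispersion decile
    return low - high
```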

Keep Reading

Use Analyst Target Price Forecasts to Rank Stocks?

While prior research indicates that analyst forecasts of future stock returns are substantially biased upward, might the relative rankings of return forecasts be informative? In their June 2023 paper entitled “Analysts Are Good at Ranking Stocks”, Adam Farago, Erik Hjalmarsson and Ming Zeng apply within-analyst 12-month stock price targets to rank stocks in two ways:

  1. Average Demeaned Return – each month, demean the returns implied by target prices from an analyst by subtracting from each return the average forecasted return for that analyst. Then, average the demeaned returns for a given stock across all analysts.
  2. Average Ranking – each month, rank stocks by forecasted return for each analyst. Then, average the rankings for a given stock across all analysts covering that stock.

Both approaches remove the upward biases observed in raw target prices. To test analyst forecast informativeness, they then form hedge portfolios that are each month long (short) the equal-weighted or value-weighted fifths (quintiles) of stocks with the highest (lowest) demeaned returns or rankings that month. Using 12-month target prices for each analyst who issues targets for at least three stocks during a month and associated monthly firm characteristics and stock prices during March 1999 through December 2021, they find that:
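The two debiasing schemes above can be sketched in a few lines. This is a minimal illustration of my reading of the methodology (the function name and column names are my own, not the paper's):

```python
import pandas as pd

def analyst_scores(targets):
    """Per-stock average demeaned return and average ranking.

    targets: DataFrame with columns ['analyst', 'stock', 'implied_return'],
             where implied_return is the return implied by the 12-month
             target price relative to the current price.
    """
    df = targets.copy()
    # 1. Average Demeaned Return: subtract each analyst's mean forecast,
    #    then average the demeaned forecasts across analysts per stock.
    df['demeaned'] = (df['implied_return']
                      - df.groupby('analyst')['implied_return'].transform('mean'))
    # 2. Average Ranking: rank stocks within each analyst's coverage
    #    (1 = lowest forecast), then average ranks across analysts per stock.
    df['rank'] = df.groupby('analyst')['implied_return'].rank()
    return df.groupby('stock')[['demeaned', 'rank']].mean()
```

Either within-analyst transformation strips out any constant per-analyst optimism, which is how both approaches remove the upward bias in raw target prices.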

Keep Reading

Survey of Use of Machine Learning in Finance

What is the state of machine learning in finance? In their July 2023 paper entitled “Financial Machine Learning”, Bryan Kelly and Dacheng Xiu survey studies on the use of machine learning in finance, which they argue has become an indispensable tool for understanding financial markets. They focus on the use of machine learning for statistical forecasting, covering regularization methods that mitigate overfitting and efficient algorithms for screening a vast number of potential model specifications. They emphasize areas that have received the most attention to date, including return prediction, factor models of risk and return, stochastic discount factors and portfolio choice. Based on the body of machine learning research in finance, they conclude that: Keep Reading

GPT-4 as Financial Advisor

Can state-of-the-art artificial intelligence (AI) applications such as GPT-4, trained on the text of billions of web documents, provide sound financial advice? In their June 2023 paper entitled “Using GPT-4 for Financial Advice”, Christian Fieberg, Lars Hornuf and David Streich test the ability of GPT-4 to provide suitable portfolio allocations for four investor profiles: 30 years old with a 40-year investment horizon, with either high or low risk tolerance; and, 60 years old with a 5-year investment horizon, with either high or low risk tolerance. As benchmarks, they obtain portfolio allocations for identical investor profiles from the robo-advisor of an established U.S.-based financial advisory firm. Recommended portfolios include domestic (U.S.), non-U.S. developed and emerging markets stocks and fixed income, alternative assets (such as real estate and commodities) and cash. To quantify portfolio performance, they calculate average monthly gross return, monthly return volatility and annualized gross Sharpe ratios for all portfolios. Using GPT-4 and robo-advisor recommendations and monthly returns for recommended assets during December 2016 through May 2023 (limited by availability of data for all recommended assets), they find that:

Keep Reading

Best Stock Return Horizon for Machine Learning Models?

Researchers applying machine learning to predict stock returns typically train their models on next-month returns, implicitly generating high turnover that negates gross outperformance. Does training such models on longer-term returns (with lower implicit turnovers) work better? In their June 2023 paper entitled “The Term Structure of Machine Learning Alpha”, David Blitz, Matthias Hanauer, Tobias Hoogteijling and Clint Howard explore how a set of linear and non-linear machine learning strategies trained separately at several prediction horizons perform before and after portfolio reformation frictions. Elements of their methodology are:

  • They consider four representative machine learning models encompassing ordinary least squares, elastic net, gradient boosted regression trees and 3-layer deep neural network, plus a simple average ensemble of these four models.
  • Initially, they use the first 18 years of their sample (March 1957 to December 1974) for model training and the next 12 years (January 1975 to December 1986) for validation. Each December, they retrain with the training sample expanded by one year and the validation sample rolled forward one year.
  • Each month they rank all publicly listed U.S. stocks above the 20th percentile of NYSE market capitalizations (to avoid illiquid small stocks) between −1 and +1 based on each of 206 firm/stock characteristics, with higher rankings corresponding to higher expected returns in excess of the U.S. Treasury bill yield, separately at each of four prediction horizons (1, 3, 6 and 12 months).
  • For each prediction horizon each month, they sort stocks into tenths (deciles) from highest to lowest predicted excess return and reform value-weighted decile portfolios. They then compute next-month excess returns for all ten decile portfolios.
  • They consider a naive hedge portfolio for each prediction horizon that is long (short) the top (bottom) decile. To suppress turnover costs, they also consider an efficient portfolio reformation approach that is long (short) stocks currently in the top (bottom) decile, plus stocks selected in previous months still in the top (bottom) 50% of stocks. 
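The efficient, low-turnover reformation rule in the last bullet might be sketched as follows for the long side (the short side is symmetric). This is a hypothetical illustration of my reading of the rule, not the authors' code:

```python
def efficient_long_side(prev_holdings, ranks):
    """Low-turnover long-side reformation.

    prev_holdings: set of stocks currently held long.
    ranks: dict mapping stock -> percentile rank of predicted excess
           return this month (1.0 = highest predicted return).
    New long side = stocks now in the top decile, plus previously held
    stocks that remain in the top half of the ranking.
    """
    top_decile = {s for s, r in ranks.items() if r >= 0.9}
    retained = {s for s in prev_holdings if ranks.get(s, 0.0) >= 0.5}
    return top_decile | retained
```

Because a stock is only sold once it drops out of the top half, rather than out of the top decile, this rule trades far less often than naive monthly decile reformation, which is the point of the authors' turnover-cost comparison.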

Using the data specified above during March 1957 through December 2021 and assuming constant 0.25% 1-way turnover frictions, they find that:

Keep Reading

When AIs Generate Their Own Training Data

What happens as more and more web-scraped training data for Large Language Models (LLM), such as ChatGPT, derives from outputs of predecessor LLMs? In their May 2023 paper entitled “The Curse of Recursion: Training on Generated Data Makes Models Forget”, Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot and Ross Anderson investigate changes in LLM outputs as training data becomes increasingly LLM-generated. Based on simulations of this potential trend, they find that: Keep Reading

ChatGPT News-based Forecasts of Stock Market Returns

Are the latest forms of artificial intelligence (AI) better at forecasting stock market returns than humans? In his February 2023 preliminary paper entitled “Surveying Generative AI’s Economic Expectations”, Leland Bybee summarizes results of monthly and quarterly forecasts by a large language model (ChatGPT-3.5) of U.S. stock market returns and 13 economic variables based on samples of Wall Street Journal (WSJ) news articles. He uses the S&P 500 Index as a proxy for the U.S. stock market. He asks ChatGPT to provide reasons for responses. He compares accuracy of ChatGPT forecasts to those from: (1) surveys of humans, including the Survey of Professional Forecasters, the American Association of Individual Investors (AAII) and the Duke CFO Survey; and, (2) a wide range of fundamental and economic predictors tested in past research. Using monthly samples of 300 randomly selected WSJ news articles, results of human surveys and various fundamental/economic data during 1984 through 2021, he finds that:

Keep Reading

Vanguard or Fidelity? Active or Passive?

Should investors in low-cost mutual funds consider active ones? In his April 2023 paper entitled “Vanguard and Fidelity Domestic Active Stock Funds: Both Beat their Style Mimicking Vanguard Index Funds, & Vanguard Beats by More”, Edward Tower compares returns of active Vanguard and Fidelity stock mutual funds to those of style-mimicking portfolios of Vanguard index funds. He segments active funds into three groups: U.S. diversified, sector/specialty and global/international. For U.S. diversified funds, for which samples are relatively large, he regresses monthly net returns of each active fund versus monthly net returns of Vanguard index funds to construct an index fund portfolio that duplicates the active fund return pattern (style). For sector/specialty and global/international segments, for which samples are small, he instead compares active fund net returns to those for respective benchmarks. He uses Vanguard Admiral class funds when available, and Investor class funds otherwise. He applies monthly rebalancing for all fund portfolios. Using fund descriptions and monthly net returns during January 2013 through March 2023, he finds that:

Keep Reading

Stocktwits Tweeters as Investing Experts

Are there clearly skilled and unskilled stock-picking influencers on social media platforms such as StockTwits? If so, do investor reactions to such influencers drive out the unskilled ones? In their March 2023 paper entitled “Finfluencers”, Ali Kakhbod, Seyed Kazempour, Dmitry Livdan and Norman Schuerhoff examine skillfulness, influence and survival of StockTwits tweeters who have followers. They apply four skill metrics to measure stock-picking skill levels of these influencers to identify those who are: (1) skilled (reliably good advice); (2) unskilled; and, (3) anti-skilled (reliably bad advice). They calculate future (1 to 20 days) abnormal returns for each influencer by comparing factor model-adjusted returns (alphas) of associated stock picks before and after recommendation dates. To assess skill persistence, they compare influencer skill levels in the first and second halves of the sample. Using tweet-level and follower data from StockTwits for 29,477 influencers, matched daily stock returns and daily equity factor returns during July 13, 2013 through January 1, 2017, they find that:

Keep Reading
