Investing Expertise

Can analysts, experts and gurus really give you an investing/trading edge? Should you track the advice of as many as possible? Are there ways to tell good ones from bad ones? Recent research indicates that the average “expert” has little to offer individual investors/traders. Finding exceptional advisers is no easier than identifying outperforming stocks. Indiscriminately seeking the output of as many experts as possible is a waste of time. Learning what makes a good expert accurate is worthwhile.

Profitable Machine Learning Stock Picking Strategies?

Can machine learning models pick stocks that unequivocally generate alpha out-of-sample? In their November 2023 paper entitled “The Expected Returns on Machine-Learning Strategies”, Vitor Azevedo, Christopher Hoegner and Mihail Velikov assess expected net returns and alphas of machine learning-based anomaly trading strategies. They use nine machine learning models to predict next-month stock returns based on inputs for up to 320 published anomalies, each added to the mix as of its publication date.

They train the models using an expanding window, with the last seven years reserved for six years of validation and one year of out-of-sample testing. During the test year, they each month reform a portfolio that is long (short) the value-weighted tenth, or decile, of stocks with the highest (lowest) predicted next-month returns. They then calculate actual next-month gross returns and 6-factor (market, size, value, profitability, investment and momentum) alphas during the test year. To calculate net returns and alphas, they multiply trading frictions estimated from historical bid-ask spreads by monthly portfolio turnover. Using returns and firm characteristics for a broad sample of U.S. common stocks having data covering at least 20% of the 320 anomalies during March 1957 through December 2021, with out-of-sample tests starting January 2005, they find that:
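As a rough illustration of the portfolio mechanics (hypothetical tickers and numbers, not the authors' code), a value-weighted decile long-short return and the friction adjustment might look like:

```python
# Sketch: form a value-weighted decile long-short portfolio from predicted
# returns, then net out estimated trading frictions. All inputs illustrative.

def decile_long_short(predicted, realized, market_cap):
    """predicted/realized: dicts of ticker -> predicted and actual next-month
    returns; market_cap: ticker -> capitalization used for value weighting."""
    ranked = sorted(predicted, key=predicted.get)   # ascending by forecast
    n = len(ranked) // 10                           # decile size
    short_leg, long_leg = ranked[:n], ranked[-n:]

    def vw_return(leg):
        total_cap = sum(market_cap[t] for t in leg)
        return sum(realized[t] * market_cap[t] / total_cap for t in leg)

    return vw_return(long_leg) - vw_return(short_leg)

def net_return(gross, turnover, half_spread):
    """Approximate net return: subtract estimated frictions
    (effective half-spread times monthly portfolio turnover)."""
    return gross - turnover * half_spread
```

Because the short decile holds the lowest forecasts, a model with any predictive power should produce a positive gross spread, which the friction term then erodes in proportion to turnover.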

Keep Reading

Understandable AI Stock Pricing?

Can explainable artificial intelligence (AI) bridge the gap between complex machine learning predictions and economically meaningful interpretations? In their December 2023 paper entitled “Empirical Asset Pricing Using Explainable Artificial Intelligence”, Umit Demirbaga and Yue Xu apply explainable artificial intelligence to extract the drivers of stock return predictions made by four machine learning models: XGBoost, decision tree, K-nearest neighbors and neural networks. They use 209 firm/stock-level characteristics and stock returns, all measured monthly, as machine learning inputs. They use 70% of their data for model training, 15% for validation and 15% for out-of-sample testing. They consider two explainable AI methods:

  1. Local Interpretable Model-agnostic Explanations (LIME) – explains model predictions by approximating the complex model locally with a simpler, more interpretable model.
  2. SHapley Additive exPlanations (SHAP) – uses game theory to determine which stock-level characteristics are most important for predicting returns.
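The intuition behind SHAP can be illustrated with a brute-force Shapley computation on a toy prediction function (a sketch of the concept, not the shap library; the feature names and model are hypothetical):

```python
# Sketch of the Shapley idea: a feature's value is its average marginal
# contribution to the prediction across all orderings of the features.
from itertools import permutations

def shapley_values(features, predict):
    """features: dict name -> value; predict: function taking a dict of
    *present* features (absent ones treated as a baseline by predict)."""
    names = list(features)
    phi = {n: 0.0 for n in names}
    orderings = list(permutations(names))
    for order in orderings:
        present = {}
        prev = predict(present)            # baseline: no features revealed
        for name in order:
            present[name] = features[name]
            curr = predict(present)
            phi[name] += curr - prev       # marginal contribution
            prev = curr
    return {n: phi[n] / len(orderings) for n in names}
```

Brute force over all orderings is only feasible for a handful of features; the practical appeal of SHAP is that it approximates these values efficiently for models with hundreds of characteristics, and that the values sum to the prediction minus the baseline.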

They present a variety of visualizations to help investors understand explainable AI outputs. Using monthly data as described for all listed U.S. stocks during March 1957 through December 2022, they find that:

Keep Reading

Causal Discovery Applications in Stock Investing

Can causal discovery algorithms (which look beyond simple statistical association, and instead consider all available data and allow for lead-lag relationships) make economically meaningful contributions to equity investing? In their December 2023 paper entitled “Causal Network Representations in Factor Investing”, Clint Howard, Harald Lohre and Sebastiaan Mudde assess the economic value of a representative score-based causal discovery algorithm via causal network representations of S&P 500 stocks for three investment applications:

  1. Generate causality-based peer groups (e.g., to account for characteristic concentrations) to neutralize potentially confounding effects in long-short equity strategies across a variety of firm/stock characteristics.
  2. Create a centrality factor represented by returns to a portfolio that is each month long (short) peripheral (central) stocks.
  3. Devise a monthly network topology density market timing indicator.
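Applications 2 and 3 can be sketched with a toy undirected network (hypothetical edges; the paper's score-based algorithm infers the causal network itself, which is omitted here):

```python
# Sketch: given a network over stocks as an edge list, compute degree
# centrality, split stocks into peripheral (long) and central (short)
# halves, and compute network density as a timing input.

def degree_centrality(nodes, edges):
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

def centrality_legs(nodes, edges):
    """Return (long, short): long the least-connected half (peripheral),
    short the most-connected half (central)."""
    deg = degree_centrality(nodes, edges)
    ranked = sorted(nodes, key=lambda n: deg[n])
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

def network_density(nodes, edges):
    """Share of possible pairwise links actually present."""
    n = len(nodes)
    return len(edges) / (n * (n - 1) / 2)
```

Degree centrality stands in here for whatever centrality measure the network representation supplies; the density reading would be tracked month to month as the timing indicator.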

Using daily and monthly data for S&P 500 stocks and monthly returns for widely used equity factors during January 1993 through December 2022, they find that:

Keep Reading

Causality in the 5-factor Model of Stock Returns

Does the Fama-French 5-factor model of stock returns stand up to causality analyses? Do the factors cause the returns? In their December 2023 paper entitled “Re-Examination of Fama-French Factor Investing with Causal Inference Method”, Lingyi Gu, Ellen Zhang, Andrew Heinz, Jingxuan Liu, Tianyue Yao, Mohamed AlRemeithi and Zelei Luo construct causal graphs to analyze the relationship between future (next-month) stock return and each of the five factors in the model, which are:

  1. Market – value-weighted market return minus the risk-free rate.
  2. Size – return on small stocks minus the return on big stocks.
  3. Value – return on high book-to-market ratio stocks minus the return on low book-to-market ratio stocks.
  4. Profitability – return on robust profitability stocks minus the return on weak profitability stocks.
  5. Investment – return on conservative investment stocks minus the return on aggressive investment stocks.
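In regression form, the model explains the excess return of asset i in month t as a linear combination of these five factors:

```latex
R_{it} - R_{Ft} = a_i + b_i (R_{Mt} - R_{Ft}) + s_i\,SMB_t + h_i\,HML_t + r_i\,RMW_t + c_i\,CMA_t + e_{it}
```

The causality question is whether the factor realizations on the right-hand side actually drive the returns on the left, rather than merely covarying with them.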

They consider a constraint-based algorithm, a score-based algorithm and a functional model to estimate causality. For each approach, they evaluate the stability and strength of the causal relationships across different conditions by exploring robustness to data loss or alterations. Their goal is to replicate the initial conditions and datasets used in the 2015 paper that introduced the 5-factor model. Using monthly returns for a broad sample of U.S. common stocks and the five specified factors during July 1963 through December 2013, they find that:

Keep Reading

Inherent Misspecification of Factor Models?

Do linear factor model specification choices inherently produce out-of-sample underperformance of investment strategies seeking to exploit factor premiums? In their January 2024 paper entitled “Why Has Factor Investing Failed?: The Role of Specification Errors”, Marcos Lopez de Prado and Vincent Zoonekynd examine whether standard practices induce factor specification errors and how such errors might explain actual underperformance of popular factor investing strategies. They consider potential effects of confounding variables and colliding variables on factor model out-of-sample performance. Based on logical derivations, they conclude that:

Keep Reading

The State of LLM Use in Accounting and Finance

How might Large Language Models (LLMs), trained to understand, generate and interact with human language via billions or trillions of tuned parameters, impact accounting and finance? In their December 2023 paper entitled “A Scoping Review of ChatGPT Research in Accounting and Finance”, Mengming Dong, Theophanis Stratopoulos and Victor Wang synthesize recent publications and working papers on ChatGPT and related LLMs to inform practitioners and researchers of the latest developments and uses. They also provide a brief history of LLMs. Based on review of about 200 papers released during January 2022 through October 2023, they conclude that:

Keep Reading

Performance of Barron’s Annual Top 10 Stocks

Each year in December, Barron’s publishes its list of the best 10 stocks for the next year. Do these picks on average beat the market? To investigate, we scrape the web to find these lists for years 2011 through 2023, calculate the associated calendar year total return for each stock and calculate the average return for the 10 stocks for each year. We use SPDR S&P 500 ETF Trust (SPY) as a benchmark for these averages. We source most stock prices from Yahoo!Finance, but also use Historical Stock Price.com for a few stocks no longer tracked by Yahoo!Finance. Using year-end dividend-adjusted stock prices for the specified stock-years during 2010 through 2023, we find that:

Keep Reading
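The return arithmetic above can be sketched as follows (illustrative prices, not actual Barron’s picks):

```python
# Sketch: calendar-year total return from dividend-adjusted year-end prices,
# averaged over the ten picks. Prices here are made up for illustration.

def total_return(start_price, end_price):
    """Dividend-adjusted prices already embed payouts, so the simple
    price ratio gives the total return."""
    return end_price / start_price - 1

def average_pick_return(prices):
    """prices: list of (year_start, year_end) adjusted prices, one per pick."""
    rets = [total_return(s, e) for s, e in prices]
    return sum(rets) / len(rets)
```

The same calculation applied to SPY's adjusted year-end prices gives the benchmark return for each year.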

Which Predictors Make Machine Learning Work?

With stock portfolio construction increasingly based on “black box” machine learning models with very large numbers of inputs, how can investors decide whether portfolio recommendations make sense? In their November 2023 paper entitled “The Anatomy of Machine Learning-Based Portfolio Performance”, Philippe Coulombe, David Rapach, Christian Montes Schütte and Sander Schwenk-Nebbe describe a way to use Shapley values to estimate contributions of groups of related inputs to machine learning-based portfolio performance. Their approach applies to any fitted prediction model (or ensemble of models) used to forecast asset returns and construct a portfolio based on the forecasts. They illustrate their approach on an XGBoost machine learning model that each month:

  • Uses 207 firm characteristics to forecast next-month returns of associated stocks.
  • Excludes stocks in the bottom 20% of NYSE market capitalizations.
  • Sorts surviving stocks into fifths, or quintiles, based on forecasted returns.
  • Reforms a hedge portfolio that is long (short) the value-weighted top (bottom) quintile.
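These monthly steps can be sketched as follows (hypothetical tickers and breakpoints, not the authors' code):

```python
# Sketch: drop stocks below the 20th percentile of NYSE market
# capitalization, sort survivors into quintiles by forecasted return,
# and hold the top quintile long against the bottom quintile short.

def quintile_hedge_legs(forecast, market_cap, nyse_caps):
    """forecast: ticker -> predicted next-month return; market_cap: ticker ->
    capitalization; nyse_caps: NYSE capitalizations defining the breakpoint."""
    cutoff = sorted(nyse_caps)[int(0.2 * len(nyse_caps))]  # 20th-percentile breakpoint
    survivors = [t for t in forecast if market_cap[t] >= cutoff]
    ranked = sorted(survivors, key=forecast.get)           # ascending by forecast
    n = len(ranked) // 5                                   # quintile size
    return ranked[-n:], ranked[:n]                         # (long, short)
```

Within each leg, the paper value-weights positions; the Shapley-based attribution then asks how much each group of input characteristics contributed to the resulting hedge portfolio performance.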

They then assign each of the 207 inputs to one of 20 groups based on similarities and estimate the contribution of each input group to portfolio performance. Using 207 monthly firm/stock characteristics for all listed U.S. firms and the monthly risk-free rate during January 1960 through December 2021, with portfolio testing commencing January 1973, they find that:

Keep Reading

GPT-4 as Stock Ranker

Can the large language model GPT-4 help investors make investment decisions? In their October 2023 paper entitled “Can ChatGPT Assist in Picking Stocks?”, Matthias Pelster and Joel Val conduct a live test during the 2023 second quarter earnings announcements of the value and timeliness of investment advice from GPT-4 augmented with WebChatGPT for internet access. They ask GPT-4 for two separate series of ratings for each S&P 500 firm over approximately two months:

  1. Considering all available information from news outlets and social media discussions, provide on a scale from -5 to +5 a forecast for the next earnings announcement.
  2. Rate on a scale from -5 to +5 the attractiveness of the stock of each firm over the next month.

They apply these two series to assess the accuracy of GPT-4 earnings forecasts and the response of its stock attractiveness ratings to news. They also measure 30-day future returns of equal-weighted portfolios based on GPT-4 attractiveness ratings, reformed with each ratings update. Using the two series of GPT-4 ratings during July 5, 2023 through September 8, 2023, they find that:

Keep Reading

AI CFAs?

Can large language models (LLM) such as ChatGPT and GPT-4 pass the Chartered Financial Analyst (CFA) exam, which covers fundamentals of investment tools, asset valuation, portfolio management and wealth planning? In their October 2023 paper entitled “Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on Mock CFA Exams”, Ethan Callanan, Amarachi Mbakwe, Antony Papadimitriou, Yulong Pei, Mathieu Sibue, Xiaodan Zhu, Zhiqiang Ma, Xiaomo Liu and Sameena Shah investigate whether ChatGPT and GPT-4 could pass the CFA exam. They ask the models to respond to mock exam questions from the first two of the three levels on the exam:

  • Level I – 180 standalone multiple choice questions (using questions from five mock exams).
  • Level II – 22 vignettes and 88 accompanying multiple choice questions, with a higher proportion requiring interpretation of numerical data and calculations than found in Level I (using questions from two mock exams).
  • Level III – a mix of vignette-related essay questions and vignette-related multiple choice questions (untested due to the difficulty of assessing essay responses).

They assess responses to the Level I and II mock exam questions via three approaches:

  1. Gauging inherent model reasoning abilities without providing any correct examples.
  2. Facilitating model acquisition of new knowledge by providing examples of good responses for either (a) a random sample of questions within level or (b) one question from each exam topic.
  3. Prompting the models to address each question step-by-step and to show their work for calculations.

They then compare responses of the two models to approved answers and estimate whether either could pass based on proficiency thresholds reported by CFA exam takers on Reddit. Using mock CFA Level I and II exam questions and the three test approaches as described above, they find that:

Keep Reading
