Objective research to aid investing decisions


Investing Expertise

Can analysts, experts and gurus really give you an investing/trading edge? Should you track the advice of as many as possible? Are there ways to tell good ones from bad ones? Recent research indicates that the average “expert” has little to offer individual investors/traders. Finding exceptional advisers is no easier than identifying outperforming stocks. Indiscriminately seeking the output of as many experts as possible is a waste of time. Learning what makes a good expert accurate is worthwhile.

Industry Expert Versus Generalist Financial AIs

Should those aiming to exploit machine learning for portfolio construction focus model training on the broad market or specific industries? In their April 2025 paper entitled “Do Machine Learning Models Need to Be Sector Experts?”, Matthias Hanauer, Amar Soebhag, Marc Stam and Tobias Hoogteijling examine return predictability using several machine learning (ML) models trained on a comprehensive set of firm/stock characteristics in three ways:

  1. Generalist – trained on all stocks in the sample.
  2. Specialist – trained on stocks only within one of 12 industry classifications.
  3. Hybrid – integrates overall sample and industry information via industry-neutral mappings from stock characteristics to expected returns.

They employ four ML models: elastic nets, gradient-boosted regression trees, 3-layer neural networks and an equal-weighted ensemble of the three. They train and tune these models with an expanding window consisting of an initial 18-year training set, 12-year validation set and 1-year test set, shifted forward each year while retaining the initial training start point. Input data consist of monthly stock returns and monthly values of 153 firm-level characteristics for U.S. stocks each month at or above the 20th percentile of NYSE market capitalizations. They assign stocks to the 12 industries (including Other), with average weights ranging from 22.5% for Tech to 1.4% for Durables. They then each month sort stocks into tenths (deciles) by machine learning ensemble-predicted next-month return and re-form a volatility-scaled, value-weighted hedge portfolio that is long the decile with the highest expected returns and short the decile with the lowest. Using the specified inputs during January 1957 (January 1986 for a non-U.S. sample) through December 2023, they find that:

Keep Reading
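The monthly decile-sort hedge portfolio construction described above can be sketched roughly as follows (an illustrative simplification with randomly generated data; the paper's volatility scaling and exact weighting details are omitted):

```python
import numpy as np
import pandas as pd

# Hypothetical one-month cross-section: model-predicted next-month returns,
# market capitalizations and realized returns for 1,000 stocks.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "predicted_return": rng.normal(0.01, 0.05, 1000),
    "market_cap": rng.lognormal(10, 1, 1000),
    "realized_return": rng.normal(0.01, 0.10, 1000),
})

# Sort stocks into deciles (0 = lowest, 9 = highest) by predicted return.
df["decile"] = pd.qcut(df["predicted_return"], 10, labels=False)

def value_weighted_return(group):
    """Market-cap-weighted average realized return for one decile."""
    w = group["market_cap"] / group["market_cap"].sum()
    return (w * group["realized_return"]).sum()

# Long the highest-expected-return decile, short the lowest.
long_leg = value_weighted_return(df[df["decile"] == 9])
short_leg = value_weighted_return(df[df["decile"] == 0])
hedge_return = long_leg - short_leg  # before volatility scaling
```

Each month the sort repeats on fresh predictions, re-forming the long and short legs.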

Evolution of Asset Pricing Approaches

Does the evolution of empirical asset pricing point inevitably to machine learning methods? In his February 2025 paper entitled “From Econometrics to Machine Learning: Transforming Empirical Asset Pricing”, Chuan Shi summarizes the transition from traditional methods to machine learning in empirical asset pricing. He traces the historical development of traditional asset pricing models and their roles as benchmarks for decades of research. He compares the strengths and weaknesses of traditional methods and machine learning, explaining why the latter is well-suited to address challenges of the big data era. Finally, he introduces an approach based on the stochastic discount factor (SDF), melding the simplicity of traditional models and the flexibility/predictive power of machine learning. Based on the body of research on asset pricing, he concludes that: Keep Reading

AIs Changing Markets?

Is the ability of artificial intelligence (AI) platforms such as ChatGPT to summarize and interpret large volumes of financial data altering investor trading behaviors and thereby changing financial markets? In the April 2025 revision of their paper entitled “ChatGPT and the Stock Market”, Jenny Stanco and Kee Chung examine the impact of ChatGPT on stock trading, volatility, liquidity, and price efficiency. For their analysis, they separate firms into those with abundant publicly available information (high-info) and those with limited information (low-info), employing firm size and age as proxies for information availability. They further use Google search volumes to estimate levels of attention firms may get from ChatGPT. They use the year before (after) ChatGPT launch on November 30, 2022 as the pre-launch (post-launch) subperiod. Using daily trading volumes and return volatilities, and earnings forecasts/announcements/actuals data, for a broad sample of U.S. stocks from the end of November 2021 through the end of November 2023, they find that: Keep Reading

Using LLMs to Discover Better Portfolio Performance

Can large language models (LLM) help improve portfolio performance metrics, portfolio optimization and strategy feature discovery? In his three January-February 2025 papers entitled “AlphaSharpe: LLM-Driven Discovery of Robust Risk-Adjusted Metrics”, “AlphaPortfolio: Discovery of Portfolio Optimization and Allocation Methods Using LLMs” and “AlphaQuant: LLM-Driven Automated Robust Feature Engineering for Quantitative Finance”, Kamer Yuksel explores use of specially trained LLMs to discover new:

  • Enhanced portfolio risk-return metrics that outperform traditional approaches such as Sharpe ratio.
  • Better portfolio optimization methods.
  • Robust investment strategy features.

The development processes are iterative for continuous improvement. He assesses usefulness of discoveries with 15 years of historical data for 3,246 US stocks and ETFs, of which he uses the last few years for out-of-sample equal-weighted portfolio testing. Using these methods and this dataset, he finds that:

Keep Reading
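For reference, the traditional Sharpe ratio against which the LLM-discovered metrics are benchmarked can be computed as below (a standard textbook formulation, not the author's code; the risk-free rate and annualization factor are assumptions):

```python
import numpy as np

def sharpe_ratio(returns, risk_free_annual=0.0, periods_per_year=252):
    """Annualized Sharpe ratio from a series of periodic returns."""
    excess = np.asarray(returns) - risk_free_annual / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Illustrative: roughly 15 years of simulated daily returns.
rng = np.random.default_rng(42)
daily = rng.normal(0.0005, 0.01, 252 * 15)
sr = sharpe_ratio(daily)
```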

How Are Uranium ETFs Doing?

Are plans to use nuclear power to provide electricity for proliferating data centers driving attractive performance for uranium exchange-traded-funds (ETF)? To investigate, we consider four such ETFs, all currently available:

  • VanEck Uranium and Nuclear ETF (NLR) – picks stocks and depositary receipts of firms involved in uranium and nuclear energy.
  • Global X Uranium ETF (URA) – picks stocks of global companies involved in the uranium industry.
  • Sprott Uranium Miners ETF (URNM) – picks stocks of firms devoting at least 50% of assets to mining of uranium, holding physical uranium, owning uranium royalties or engaging in other activities that support uranium mining.
  • Sprott Junior Uranium Miners ETF (URNJ) – picks stocks of small firms devoting at least 50% of assets to mining of uranium, holding physical uranium, owning uranium royalties or engaging in other activities that support uranium mining.

We use Energy Select Sector SPDR Fund (XLE) as a benchmark. We also look at some performance results for SPDR S&P 500 ETF Trust (SPY) for perspective. We focus on monthly return statistics, along with compound annual growth rates (CAGR) and maximum drawdowns (MaxDD). Using monthly total returns for the four uranium ETFs as available and for XLE and SPY through February 2025, we find that: Keep Reading
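The CAGR and MaxDD statistics cited above follow standard definitions and can be computed from monthly total returns as follows (the input data here are illustrative):

```python
import numpy as np

def cagr_and_maxdd(monthly_returns):
    """Compound annual growth rate and maximum drawdown
    from a sequence of monthly total returns."""
    r = np.asarray(monthly_returns)
    wealth = np.cumprod(1 + r)                 # growth of $1 invested
    years = len(r) / 12
    cagr = wealth[-1] ** (1 / years) - 1
    running_peak = np.maximum.accumulate(wealth)
    maxdd = (wealth / running_peak - 1).min()  # deepest peak-to-trough dip
    return cagr, maxdd

# Example: 24 months of +2% followed by a single -30% month.
cagr, maxdd = cagr_and_maxdd([0.02] * 24 + [-0.30])
```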

A Professor’s Stock Picks

Does finance professor David Kass, who presents annual lists of stock picks on Seeking Alpha, make good selections? To investigate, we consider his picks of “10 Stocks for 2020”, “16 Stocks For 2021”, “12 Stocks For 2022”, “10 Stocks For 2023” and “10 Stocks For 2024”. For each year and each stock, we compute total (dividend-adjusted) return. For each year, we then compare the average (equal-weighted) total return for a David Kass portfolio to that of SPDR S&P 500 ETF Trust (SPY). Using dividend-adjusted returns from Yahoo!Finance for SPY and most stock picks and returns from Barchart.com and Investing.com for three picks during their selection years, we find that: Keep Reading

Mimicking Economic Expertise with LLMs

Can large language models (LLMs) mimic expert economic forecasters? In their December 2024 paper entitled “Simulating the Survey of Professional Forecasters”, Anne Hansen, John Horton, Sophia Kazinnik, Daniela Puzzello and Ali Zarifhonarvar employ a set of LLMs (primarily GPT-4o mini) to simulate economic forecasts of experts who participate in the Survey of Professional Forecasters. Specifically, they:

  1. Provide the LLMs with detailed participant characteristics (demographics, education, job title, affiliated organizations, alma maters, degrees, professional roles, location and social media presence) and then prompt the LLMs to mimic forecaster personas.
  2. Ask each persona to respond to survey questions using real-time economic data and historical survey responses.
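The two prompting steps above might be assembled roughly as follows (a hypothetical sketch; the persona fields and prompt wording are illustrative, not the paper's actual prompts):

```python
# Hypothetical persona fields of the kind the paper describes.
persona = {
    "job_title": "chief economist",
    "organization": "a large commercial bank",
    "education": "PhD in economics",
    "location": "New York",
}

def build_prompt(persona, survey_question, as_of_date):
    """Assemble a persona-conditioned prompt, restricting the model
    to information available as of the forecast date (to mitigate
    look-ahead bias, as the paper does via instruction)."""
    profile = ", ".join(f"{k.replace('_', ' ')}: {v}" for k, v in persona.items())
    return (
        f"You are a professional forecaster ({profile}). "
        f"Using only information available as of {as_of_date}, "
        f"answer the following Survey of Professional Forecasters question: "
        f"{survey_question}"
    )

prompt = build_prompt(
    persona,
    "What is your forecast for real GDP growth next quarter?",
    "2019-12-31",
)
```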

They further explore which persona characteristics affect forecast accuracy. They address the issue of potential LLM look-ahead bias by telling the models to use only information available at the time of forecasting. Using the specified forecaster personas and economic/historical forecast data, they find that:

Keep Reading

Making LLMs Better at Financial Reasoning

Can large language models (LLM) handle complex financial reasoning tasks that require multi-step logic, market knowledge and regulatory adherence? In his December 2024 paper entitled “Large Language Models in Finance: Reasoning”, Miquel Noguer I Alonso surveys and extends techniques for enhancing LLM reasoning capabilities. He presents detailed finance-specific coding examples, including dynamic portfolio optimization, scenario stress testing, regulatory compliance analysis and credit risk assessment. He addresses key challenges in scalability, interpretability and bias mitigation. Based on his knowledge and experience with LLMs and other analysis tools, he concludes that:

Keep Reading

Innumeracy and Look-ahead Bias in LLMs?

Recent research in accounting and finance finds that large language models (LLM) beat humans on a variety of related tasks, but the black box nature of LLMs obscures why. Is LLM outperformance real? In his December 2024 paper entitled “Caution Ahead: Numerical Reasoning and Look-ahead Bias in AI Models”, Bradford Levy conducts a series of experiments to open the LLM black box and determine why LLMs appear to perform so well on accounting and finance-related tasks. He focuses on numerical reasoning and look-ahead bias. Based on results of these experiments, he finds that:

Keep Reading

Meta AI Stock Picking Backtest

Do annual stock picks from the Meta AI large language model beat the market? To investigate, we ask Meta AI to pick the top 10 stocks for each of 2020-2024 based on information available only before each year. For example, we ask Meta AI to pick stocks for 2020 as follows:

“Limiting yourself strictly to information that was publicly available by December 31, 2019, what are the 10 best stocks for 2020?”

We then repeat the question for 2021, 2022, 2023 and 2024 stock picks, each time advancing the information restriction to the end of the prior year. For each year and each stock, we compute total (dividend-adjusted) return. For each year, we then compare the average (equal-weighted) total return for a Meta AI picks portfolio to those of SPDR S&P 500 ETF Trust (SPY) and Invesco QQQ Trust (QQQ). Using end-of-year dividend-adjusted closing prices for SPY, QQQ and each of the specified years/stocks (with all five queries occurring on January 12, 2025) from Yahoo!Finance, we find that:

Keep Reading
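The equal-weighted picks-versus-benchmark comparison described above works as follows (the adjusted prices and benchmark return here are illustrative, not actual results):

```python
import numpy as np

def total_return(start_adj_close, end_adj_close):
    """Total (dividend-adjusted) return from adjusted closing prices."""
    return end_adj_close / start_adj_close - 1

# Hypothetical picks for one year: (start-of-year, end-of-year) adjusted prices.
pick_prices = [(100.0, 112.0), (50.0, 47.5), (20.0, 26.0),
               (75.0, 81.0), (200.0, 202.0)]
pick_returns = [total_return(s, e) for s, e in pick_prices]

# Equal-weighted portfolio return versus a hypothetical benchmark return.
portfolio_return = float(np.mean(pick_returns))
spy_return = 0.10
outperformance = portfolio_return - spy_return
```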
