Objective research to aid investing decisions

Investing Expertise

Can analysts, experts and gurus really give you an investing/trading edge? Should you track the advice of as many as possible? Are there ways to tell good ones from bad ones? Recent research indicates that the average “expert” has little to offer individual investors/traders. Finding exceptional advisers is no easier than identifying outperforming stocks. Indiscriminately seeking the output of as many experts as possible is a waste of time. Learning what makes a good expert accurate is worthwhile.

Great Stock Picks from Forbes?

Do “great stock picks” from Forbes beat the market? To investigate, we evaluate stock picks for 2022, 2023, 2024 and 2025 via “10 Great Stock Picks for 2022 from Top-Performing Fund Managers”, “20 Great Stock Ideas for 2023 from Top-Performing Fund Managers”, “10 Best Stocks For 2024” and “The Best Stocks To Buy Now For 2026”. For each year and each stock, we compute total (dividend-adjusted) return. For each year, we then compare the average (equal-weighted) total return of a Forbes picks portfolio to that of SPDR S&P 500 ETF Trust (SPY). Using end-of-year dividend-adjusted prices from Yahoo!Finance for the specified years/stocks through 2025, we find that: Keep Reading
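For readers who want to reproduce this kind of comparison, below is a minimal Python sketch. It assumes the yfinance package and uses placeholder tickers; the actual pick lists appear in the cited Forbes articles, and this is an illustration rather than the code behind the analysis.

```python
import yfinance as yf  # assumed helper library for Yahoo!Finance data

def calendar_year_total_return(ticker: str, year: int) -> float:
    """Total (dividend/split-adjusted) calendar-year return from Yahoo!Finance prices."""
    prices = yf.Ticker(ticker).history(
        start=f"{year - 1}-12-01", end=f"{year + 1}-01-10", auto_adjust=True
    )["Close"]
    start = prices[prices.index.year == year - 1].iloc[-1]  # last close of the prior year
    end = prices[prices.index.year == year].iloc[-1]        # last close of the pick year
    return end / start - 1.0

def picks_vs_spy(picks: list[str], year: int) -> tuple[float, float]:
    """Equal-weighted average pick return versus SPY for one calendar year."""
    avg_pick = sum(calendar_year_total_return(t, year) for t in picks) / len(picks)
    return avg_pick, calendar_year_total_return("SPY", year)

# Placeholder tickers -- substitute the actual Forbes picks for the chosen year.
print(picks_vs_spy(["AAPL", "MSFT", "XOM"], 2023))
```

Comparing the two outputs for each pick year reproduces the equal-weighted picks versus SPY comparison described above.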

Performance of Barron’s Annual Top 10 Stocks

Each year in December, Barron’s publishes its list of the best 10 stocks for the next year. Do these picks on average beat the market? To investigate, we scrape the web to find these lists for years 2011 through 2026, calculate the associated calendar year total return for each stock and then average the returns of the 10 stocks for each year. We use SPDR S&P 500 ETF Trust (SPY) as a benchmark for these averages. We source most stock prices from Yahoo!Finance, but also use Historical Stock Price.com for a few stocks no longer tracked there. Using year-end dividend-adjusted stock prices for the specified stock-years during 2010 through 2025, we find that: Keep Reading

AIs Only Human?

Having been trained by humans on human information, do Large Language Models (LLMs) behave like human investors? In their January 2026 paper entitled “Artificially Biased Intelligence: Does AI Think Like a Human Investor?”, Javad Keshavarz, Cayman Seagraves and Stace Sirmans investigate whether 48 widely used LLMs exhibit any of 11 known cognitive biases in financial decision-making. They speculate that LLMs acquire biases via human-authored training data, statistical learning and training feedback that rewards perceived helpfulness over logical consistency. Specifically, they test whether:

  • LLMs reliably exhibit economically meaningful cognitive biases in financial decision problems.
  • Any biases vary across LLMs with different levels of intelligence.
  • Users can intervene to suppress any biases in real-time LLM use.

Their prompt-pair methodology ensures that findings are causal rather than just correlational. Using 25 prompt-pairs for each of 11 biases across 48 LLMs, they find that: Keep Reading
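To make the prompt-pair idea concrete, here is a minimal sketch (not from the paper). The ask_llm function is a hypothetical stand-in for whatever API each tested LLM exposes, and the sample pair uses illustrative wording aimed at the disposition effect.

```python
# Sketch of a prompt-pair bias test; ask_llm() is a hypothetical stand-in for an
# actual LLM API call and is not specified in the paper.
def ask_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in the API call for the model under test")

def bias_rate(model: str, prompt_pairs: list[tuple[str, str]]) -> float:
    """Fraction of pairs where the answer flips between the neutral and the
    bias-triggering framing of an otherwise identical decision problem."""
    flips = 0
    for neutral_prompt, biased_prompt in prompt_pairs:
        flips += ask_llm(model, neutral_prompt).strip() != ask_llm(model, biased_prompt).strip()
    return flips / len(prompt_pairs)

# Illustrative pair (wording is hypothetical, not from the paper), probing whether a
# gain/loss framing changes the sell-or-hold answer when fundamentals are identical.
pairs = [(
    "A stock you own is down 10%. Fundamentals are unchanged. Answer 'sell' or 'hold'.",
    "A stock you own is up 10%. Fundamentals are unchanged. Answer 'sell' or 'hold'.",
)]
```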

Research Polluted by Biased TAQ Data?

A Securities Information Processor (SIP) aggregates quotes and trades from all U.S. stock exchanges to feed the NYSE Trade and Quote (TAQ) database, used in much finance research to (for example) estimate effective bid-ask spreads and associated trading frictions. Is this database trustworthy? In their December 2025 paper entitled “Latency and the Look-Ahead Bias in Trade and Quote Data”, Robert Battalio, Craig Holden, Matthew Pierson, John Shim and Jun Wu investigate the reliability of TAQ data, with a focus on the arrival times of data with different latencies (delays) compared to the assuredly ordered NYSE Arca Direct Feed Data. Using timestamped NYSE Daily TAQ data and NYSE Arca Direct Feed Data for the month of June 2019, they find that: Keep Reading

Making Equity Factor Models Meaningful?

Most researchers use classical statistical testing, with a t-statistic of 2.0 as the significance threshold for accepting an hypothesis. However, this threshold is valid only if the associated p-value derives from a single test. There are hundreds of published factor tests and an unknown number of unpublished tests. How far should researchers raise the significance threshold to account for multiple hypothesis testing? In their December 2025 paper entitled “What Threshold Should be Applied to Tests of Factor Models?”, Campbell Harvey, Alessio Sancetta and Yuqian Zhao address this issue by:

  1. Clarifying applicable statistical methods, including how to measure the probability that the null hypothesis is true and insight on the False Discovery Rate (FDR), without knowing the number of tests.
  2. Reconciling existing results in the literature.
  3. Providing guidance on the threshold for deciding statistical significance.

They also discuss the plausibility of the assumptions embedded in their approach. Based on mathematical analysis in the context of financial research, they find that: Keep Reading
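As background, one textbook way to control the False Discovery Rate across many factor tests is the Benjamini-Hochberg step-up procedure. The sketch below, using NumPy and SciPy, is illustrative only and is not necessarily the specific approach the authors recommend.

```python
# Benjamini-Hochberg FDR control across many factor t-statistics (illustrative).
import numpy as np
from scipy import stats

def bh_significant(t_stats, n_obs, fdr=0.05):
    """Boolean array marking factors that survive FDR control at level `fdr`."""
    t_stats = np.asarray(t_stats, dtype=float)
    p_values = 2 * stats.t.sf(np.abs(t_stats), df=n_obs - 1)  # two-sided p-values
    order = np.argsort(p_values)
    m = len(p_values)
    thresholds = fdr * np.arange(1, m + 1) / m                 # step-up thresholds
    passed = p_values[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True                                     # reject the k smallest p-values
    return keep

# Hypothetical t-statistics for five candidate factors over 600 monthly observations.
print(bh_significant([4.1, 2.6, 2.1, 1.4, 0.7], n_obs=600))
```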

Realistic Machine Learning Stock Portfolio Performance

Prior research suggests that machine learning factor models of the cross section of stock returns greatly enhance portfolio performance by: (1) expanding the dataset to include more variables; and, (2) allowing more complex (non-linear) variable interactions. Does this finding hold up in a realistic portfolio management scenario? In their November 2025 paper entitled “What Drives the Performance of Machine Learning Factor Strategies?”, Mikheil Esakia and Felix Goltz decompose performance contributions from these two enhancements in scenarios ranging from ideal to realistic. The ideal scenario, found in much machine learning research, ignores portfolio management constraints. The realistic scenario excludes microcaps, removes look-ahead bias for yet-to-be-published factors and accounts for trading frictions. They further consider exclusion of shorting. They estimate trading frictions as half the monthly effective bid-ask spread (daily average of closing quoted spreads). Using daily and monthly data for publicly listed U.S. common stocks and monthly data for 94 firm-level characteristics as available during June 1963 through December 2021, they find that: Keep Reading
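A minimal sketch of how a half-spread friction estimate of this kind flows into net monthly return appears below, with hypothetical inputs. It illustrates the general idea only and is not the authors' code.

```python
import numpy as np

def monthly_half_spread(bid_closes, ask_closes):
    """Half the average closing quoted spread over a month, relative to the midpoint."""
    bid = np.asarray(bid_closes, dtype=float)
    ask = np.asarray(ask_closes, dtype=float)
    rel_spread = (ask - bid) / ((ask + bid) / 2)   # daily relative quoted spread
    return rel_spread.mean() / 2                   # one-way cost per unit traded

def net_return(gross_return, turnover, half_spread):
    """Gross monthly return less trading cost, approximated as turnover times half-spread."""
    return gross_return - turnover * half_spread

# Hypothetical month: 21 trading days with a 20-cent quoted spread around $100
# (about a 10 bp half-spread), 1.2% gross return and 40% one-way turnover.
hs = monthly_half_spread([99.90] * 21, [100.10] * 21)
print(net_return(0.012, 0.40, hs))
```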

How to Use AI in Research?

How should researchers apply and restrict artificial intelligence (AI) in research? In the December 2025 revision of their editorial entitled “The Use of AI in Academic Research”, Gordon Graham and Jennifer Tucker share experiences as accounting journal editors in dealing with this question. They review the meaning and capabilities of AI. They address the extent to which AI can perform the tasks involved in production of academic research, including pros, cons and unintended consequences. Based on their experiences, they conclude that: Keep Reading

Grok Sentiment Index?

Can Grok extract a useful weekly U.S. stock market sentiment metric from posts on X? To investigate, each week for two years we ask Grok to aggregate weekly U.S. stock market sentiment from at least 50 posts per week (weeks ending Saturdays), weighting each post’s sentiment according to its audience engagement (influence). For example, the Grok Sentiment for 2025-11-29 encompasses posts from 2025-11-23 through 2025-11-29. We then relate the resulting aggregate sentiment values, and changes in these values, to S&P 500 Index (SP500) returns from the first open after measurement (usually the Monday open) to the close before the next measurement (usually the Friday close). Using the specified weekly inputs, we find that: Keep Reading
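For concreteness, here is a minimal sketch of the engagement-weighted aggregation and the return alignment, with made-up inputs; the per-post sentiment scores themselves come from Grok, not from this code.

```python
def weekly_sentiment(posts):
    """Engagement-weighted average sentiment for one week.

    `posts` is a list of (sentiment, engagement) pairs, where sentiment is a score
    in [-1, 1] and engagement is a non-negative weight (likes, reposts, views, ...).
    """
    total_weight = sum(engagement for _, engagement in posts)
    return sum(s * e for s, e in posts) / total_weight

def measurement_to_measurement_return(open_after, close_before_next):
    """Index return from the first open after measurement (usually Monday) to the
    close before the next measurement (usually Friday)."""
    return close_before_next / open_after - 1.0

# Hypothetical week: three posts with sentiment scores and engagement weights.
print(weekly_sentiment([(0.6, 120), (-0.2, 30), (0.1, 50)]))
print(measurement_to_measurement_return(5950.0, 6010.0))
```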

Fundamental Retail Investors Beat Technical?

Can a large language model (LLM) applied to social media data catalog the strategy choices, sentiment and trading behavior of retail investors? In the November 2025 revision of their paper entitled “Wisdom or Whims? Decoding Retail Strategies with Social Media and AI”, Shuaiyu Chen, Lin Peng and Dexin Zhou apply GPT-4 Turbo and BERT to StockTwits messages to classify retail investor strategies as: (1) technical analysis (TA); (2) fundamental analysis (FA); (3) other strategies (such as options trading); or, (4) no strategy. They then relate strategy classes to future stock returns and trading activity. Using StockTwits messages posted by 840,846 investors on 7,834 common stocks and associated accounting, price, trade order and financial news during January 2010 through June 2023, they find that: Keep Reading

How Are AI-powered ETFs Doing?

How do exchange-traded funds (ETFs) that employ artificial intelligence (AI) to pick assets perform? To investigate, we consider ten such ETFs, eight of which are currently available:

We use SPDR S&P 500 ETF Trust (SPY) for comparison, though it is not conceptually matched to some of the ETFs. We focus on monthly return statistics, along with compound annual growth rates (CAGR) and maximum drawdowns (MaxDD). Using monthly total returns for the ten AI-powered ETFs and SPY as available through October 2025, we find that: Keep Reading
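For reference, CAGR and MaxDD follow from a monthly total return series as in this minimal sketch (the sample returns are made up).

```python
import numpy as np

def cagr_and_maxdd(monthly_returns):
    """Compound annual growth rate and maximum drawdown from monthly total returns."""
    r = np.asarray(monthly_returns, dtype=float)
    wealth = np.cumprod(1.0 + r)                   # growth of $1
    years = len(r) / 12.0
    cagr = wealth[-1] ** (1.0 / years) - 1.0
    running_peak = np.maximum.accumulate(wealth)
    maxdd = (wealth / running_peak - 1.0).min()    # deepest peak-to-trough decline
    return cagr, maxdd

# Hypothetical 12 months of total returns for one ETF.
print(cagr_and_maxdd([0.02, -0.01, 0.03, 0.00, -0.04, 0.05,
                      0.01, 0.02, -0.03, 0.04, 0.00, 0.01]))
```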
