Having been trained on human-generated information, do large language models (LLMs) behave like human investors? In their January 2026 paper entitled "Artificially Biased Intelligence: Does AI Think Like a Human Investor?", Javad Keshavarz, Cayman Seagraves and Stace Sirmans investigate whether 48 widely used LLMs exhibit any of 11 known cognitive biases in financial decision-making. They speculate that LLMs acquire biases via human-authored training data, statistical learning and training that rewards perceived helpfulness over logical consistency. Specifically, they test whether:
- LLMs reliably exhibit economically meaningful cognitive biases in financial decision problems.
- Any biases vary across LLMs with different levels of intelligence.
- Users can intervene to suppress any biases in real-time LLM use.
Their prompt-pair methodology is designed to make findings causal rather than merely correlational: each pair poses the same decision problem twice, differing only in the feature that triggers the bias. Using 25 prompt pairs for each of 11 biases across 48 LLMs, they find that:
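To make the paired-prompt design concrete, a minimal sketch of one such test follows. This is illustrative only: `query_llm` is a hypothetical stand-in for a real model API, and the prompts shown (a classic gain/loss framing pair) are not the paper's actual prompts or scoring procedure.

```python
import re

# Hypothetical stand-in for a real model API call; mocked here so the
# sketch runs end to end. A real test would call an actual LLM.
def query_llm(prompt: str) -> str:
    canned = {
        "gain": "I would choose Program A, the certain option.",
        "loss": "I would choose Program B, the risky option.",
    }
    return canned["gain" if "saved" in prompt else "loss"]

# One prompt pair for the framing effect: both prompts describe
# identical expected payoffs, framed as gains vs. losses.
GAIN_FRAME = ("600 people are at risk. Program A: 200 people will be saved. "
              "Program B: 1/3 chance all 600 are saved, 2/3 chance none are. "
              "Which program do you choose?")
LOSS_FRAME = ("600 people are at risk. Program A: 400 people will die. "
              "Program B: 1/3 chance nobody dies, 2/3 chance all 600 die. "
              "Which program do you choose?")

def choice(response: str) -> str:
    """Extract the chosen program (A or B) from a free-text response."""
    m = re.search(r"Program ([AB])", response)
    return m.group(1) if m else "?"

# Bias is flagged when the stated choice flips between equivalent frames.
gain_choice = choice(query_llm(GAIN_FRAME))
loss_choice = choice(query_llm(LOSS_FRAME))
print(f"gain frame -> {gain_choice}, loss frame -> {loss_choice}, "
      f"framing effect: {gain_choice != loss_choice}")
```

Because the two prompts are economically equivalent, any systematic flip in the model's choice isolates the framing manipulation as the cause, which is what gives the prompt-pair design its causal interpretation.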