Are large language models (LLMs) robust financial advisors for individuals? In their March 2026 paper entitled "AI Financial Advice: Supply, Demand, and Life Cycle Implications", Taha Choukhmane, Tim de Silva, Weidong Lin and Matthew Akuzawa examine personal financial advice from LLMs. They mainly use GPT-5.2 but repeat analyses with Gemini 3 Flash as a robustness check. Specifically, they:
- Construct a life cycle model of income/spending/saving/investment, with labor market shocks and asset returns calibrated to U.S. data.
- Collect questions (prompts) from a demographically representative sample of about 1,000 U.S. adults about spending and investing, including summaries of respective financial situations.
- Simulate life cycle paths, for each year from ages 22 to 90, of individuals who follow two-pass advice in LLM responses to prompts from survey participants matched by age, income and employment status. The first pass solicits textual advice, and the second translates that text into quantified saving, spending and asset allocation recommendations.
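The two-pass elicitation described above can be sketched as follows. This is a minimal illustration, not the authors' code: the `call_llm` function is a hypothetical stand-in for an API call to GPT-5.2 or Gemini 3 Flash, returning canned responses here so the sketch runs offline, and the prompt wording and JSON keys are assumptions.

```python
import json

def call_llm(prompt):
    """Hypothetical stand-in for an LLM API call (e.g., to GPT-5.2).
    Returns canned responses so this sketch runs without network access."""
    if "JSON" in prompt:
        return '{"saving_rate": 0.15, "equity_share": 0.70}'
    return "Save about 15% of income and hold roughly 70% in stocks."

def two_pass_advice(profile):
    # Pass 1: solicit free-text advice for a survey-style prompt.
    text = call_llm(
        f"I am {profile['age']}, earn ${profile['income']:,} per year, "
        f"and am {profile['employment']}. How should I save and invest?"
    )
    # Pass 2: ask the model to translate its own text into numbers.
    quantified = call_llm(
        "Convert this advice into JSON with keys saving_rate and "
        f"equity_share (both fractions): {text}"
    )
    return json.loads(quantified)

rec = two_pass_advice({"age": 35, "income": 60000, "employment": "employed"})
print(rec)  # {'saving_rate': 0.15, 'equity_share': 0.7}
```

The quantified output can then be fed directly into a life cycle simulation as a fixed behavioral rule.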
They consider two benchmarks: (1) optimal behaviors from the life cycle model simulations; and (2) substitution of survey respondent prompts with expert (academic) prompts that ask the LLM to give professional life cycle advice under modern portfolio theory, including explicit personal situations/economic assumptions. Using the specified life cycle model and LLM prompts, they find that:
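To make the simulation step concrete, here is a heavily simplified life cycle path under a fixed saving rate and equity share. All parameters (starting income, return moments, income shock volatility, the 4% retirement drawdown) are illustrative assumptions, not the paper's U.S.-calibrated model, which also includes labor market shocks richer than the one below.

```python
import random

def simulate_path(saving_rate, equity_share, seed=0):
    """Simulate one wealth path from age 22 to 90 under a fixed rule.
    Parameters are illustrative, not the paper's calibration."""
    rng = random.Random(seed)
    income, wealth = 50_000.0, 0.0
    rf, eq_mean, eq_vol = 0.02, 0.06, 0.18      # assumed return parameters
    for age in range(22, 91):
        if age < 65:
            # Working years: noisy income growth, save a fixed fraction.
            income *= 1.0 + rng.gauss(0.01, 0.05)
            wealth += saving_rate * income
        else:
            # Retirement: withdraw 4% of wealth per year (assumed rule).
            wealth *= 0.96
        # Portfolio return: blend of risky equity and risk-free asset.
        ret = equity_share * rng.gauss(eq_mean, eq_vol) + (1 - equity_share) * rf
        wealth *= 1.0 + ret
    return wealth

print(round(simulate_path(0.15, 0.70, seed=1)))
```

Running many such paths for the LLM-recommended rule versus the model-optimal policy is the kind of comparison that underlies the paper's benchmarks.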