...
AI-Driven Decision Making
This study evaluates whether LLM-driven persona simulations can predict post-launch outcomes and thereby help teams kill weak ideas early. Using standardized inputs (rich personas, structured service scenarios) and fixed axes for value and convenience, three outputs are generated: a Persona–Service Preference Matrix, a Brand Perceptual Map (Value × Convenience), and a Churn–Elasticity Matrix. These simulation results are compared with ex-post logs via RMSE, MAE, and Spearman rank correlation. Findings show that LLMs track directional trends and rankings but systematically overestimate levels and compress variance. Consistency improves with multi-model and multi-seed ensembles, tighter scenario specificity, outlier control, and post-hoc calibration against real logs. The paper proposes an operational loop (LLM draft → log-based calibration → operational forecasting) together with governance practices (quarterly refresh, holdouts, transferability checks). LLMs are positioned as an auxiliary inference layer that accelerates experiment design, not as a substitute for human data.
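A minimal sketch of the evaluation step described above: scoring simulated persona–service preferences against log-derived values with RMSE, MAE, and Spearman rank correlation, followed by a simple linear post-hoc calibration. The array names, example values, and the linear calibration form are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch only: compare LLM persona-simulation outputs with ex-post logs,
# then apply a simple linear post-hoc calibration. Data values are hypothetical.
import numpy as np
from scipy.stats import spearmanr

def agreement_metrics(simulated: np.ndarray, observed: np.ndarray) -> dict:
    """RMSE / MAE capture level errors; Spearman captures rank agreement."""
    err = simulated - observed
    rho, _ = spearmanr(simulated, observed)
    return {
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "mae": float(np.mean(np.abs(err))),
        "spearman_rho": float(rho),
    }

def fit_linear_calibration(simulated: np.ndarray, observed: np.ndarray):
    """Least-squares slope/intercept mapping simulated levels onto observed levels.
    A fitted slope > 1 suggests the raw simulations compressed variance
    relative to the logs; a downward shift suggests overestimated levels."""
    slope, intercept = np.polyfit(simulated, observed, deg=1)
    return lambda x: slope * x + intercept

# Hypothetical scores per persona-service cell: LLM draft vs. log-derived rates.
sim = np.array([0.62, 0.58, 0.71, 0.66, 0.55, 0.69])
obs = np.array([0.41, 0.33, 0.57, 0.49, 0.30, 0.52])

print("raw:", agreement_metrics(sim, obs))
calibrate = fit_linear_calibration(sim, obs)  # in practice, fit on a calibration split
print("calibrated:", agreement_metrics(calibrate(sim), obs))
```

In practice the calibration would be fit on one slice of the logs and evaluated on a holdout, consistent with the governance practices (holdouts, quarterly refresh) listed above.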