Large language models and humans converge in judging public figures' personalities

PNAS Nexus. 2024 Sep 19;3(10):pgae418. doi: 10.1093/pnasnexus/pgae418. eCollection 2024 Oct.

Abstract

ChatGPT-4 and 600 human raters evaluated the personalities of 226 public figures using the Ten-Item Personality Inventory (TIPI). Correlations between ChatGPT-4's ratings and aggregate human ratings ranged from r = 0.76 to 0.87, outperforming models specifically trained to make such predictions. Notably, ChatGPT-4 received no training data or feedback on its performance. We discuss potential explanations for, and practical implications of, ChatGPT-4's ability to accurately mimic human responses.
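
To make the reported comparison concrete, the following Python sketch shows one plausible way to compute the per-trait correlations the abstract describes: human ratings are aggregated by averaging across raters, then each trait's aggregate scores are correlated with the model's scores across figures. The array names, shapes, and trait list are illustrative assumptions; the paper does not publish its analysis code.

```python
import numpy as np
from scipy.stats import pearsonr

# TIPI measures the Big Five traits (assumed ordering; illustrative only).
TRAITS = ["Extraversion", "Agreeableness", "Conscientiousness",
          "Emotional Stability", "Openness to Experience"]

def trait_correlations(human_ratings, model_ratings):
    """Correlate model ratings with aggregate human ratings, per trait.

    human_ratings: array of shape (n_raters, n_figures, n_traits)
    model_ratings: array of shape (n_figures, n_traits)
    Returns a dict mapping trait name -> Pearson r across figures.
    """
    # Aggregate humans: mean across raters for each figure and trait.
    human_mean = human_ratings.mean(axis=0)  # (n_figures, n_traits)
    return {
        trait: pearsonr(human_mean[:, t], model_ratings[:, t])[0]
        for t, trait in enumerate(TRAITS)
    }

# Demo with random data (600 raters, 226 figures, 5 traits), matching the
# study's sample sizes but not its actual ratings.
rng = np.random.default_rng(0)
humans = rng.uniform(1, 7, size=(600, 226, 5))
model = rng.uniform(1, 7, size=(226, 5))
print(trait_correlations(humans, model))
```

Averaging across many raters before correlating is what makes the benchmark demanding: aggregation cancels idiosyncratic rater noise, so the r = 0.76 to 0.87 range reflects agreement with a relatively stable consensus judgment rather than with any single rater.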

Keywords: AI; large language models; personality perception; zero-shot predictions.