Thursday, May 07, 2026

Asking AI Chatbots to Adopt an Expert Persona Doesn't Work


Conventional wisdom holds that asking an AI chatbot to adopt an expert persona elicits better answers. According to this advice, prompts yield better responses if they include statements such as "Imagine that you are a world-class statistician" or "Think like an expert engineer." Documentation for some of the major models (such as Claude and ChatGPT) has encouraged users to engage in this type of prompt engineering. Yet new research suggests that asking the models to behave as experts does not work.

Savir Basil, Ina Shapiro, Dan Shapiro, Ethan Mollick, Lilach Mollick, and Lennart Meincke have published a report at Wharton titled "Playing Pretend: Expert Personas Don’t Improve Factual Accuracy." Knowledge@Wharton summarized their findings:

The researchers tested several ways of instructing AI to answer nearly 200 PhD-level questions in one test and a further 300 similarly demanding ones in another. Some prompts framed the model as a subject matter expert, others as a different kind of expert, or as a child or layperson. But the results were consistent. Expert personas did not lift performance and in most cases were no better than a simple baseline with no persona at all, while less knowledgeable roles often hurt accuracy. Any gains were small and tied to specific models, not a general pattern, and even matching the persona to the task — using a “physics expert” for physics questions, for example — made little difference.
