📝 Research summary: The importance of aligned expertise between AI and its users
A new paper offers a hint that LLMs need more than domain knowledge and ethics: they may also need to master social skills
In our last post, we took a look at a paper exploring the relationship between AI usage and critical thinking. Central to that relationship is the idea of trust: the paper showed evidence that the more someone trusts AI, the less they self-report needing to engage in critical thinking for the tasks they ask AI to do.
The list of things that do or may influence a user’s level of trust in AI is exceedingly long and wildly complex: a person’s overall view of technology (as we’ve seen in other research), their personality type, their perception of the tool they’re using or of the company that created it, and so on. All of these factors surely play at least some role in that dynamic.
A new paper takes a look at another element of trust: expertise. Expertise is a particularly important dimension of trust to understand, because, as these charts from the paper illustrate, we’re already at a stage in AI’s development where models’ expertise far exceeds that of users in most domains.
Drawing on a sample of 25,000 Bing Copilot conversations, the researchers used models to infer the user’s expertise level, the model’s expertise level, and the user’s satisfaction level. (The method for determining user satisfaction is fascinating and worth exploring in a future post.)