The study that just won’t die: disfluency edition
From a new article in HBR on using behavioural science when rolling out AI tools:
Consider building an AI transcription tool. It would be reasonable for designers to assume that the most seamless interface is always best. But behavioral research shows that intentionally adding a little friction—e.g., displaying words in harder-to-read font—actually helps people scrutinize the text more closely, which helps them find and correct errors.
The classic paper in this space is by Adam Alter and friends.1 In Experiment 1, they gave student volunteers the cognitive reflection test (CRT), which comprises questions such as:
A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball.
How much does the ball cost? _____ cents
The questions typically have an intuitive, wrong answer and a correct answer that requires a bit more thought.
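For the bat-and-ball item above, the intuitive answer is 10 cents, but only 5 cents satisfies both constraints. A quick sketch makes the arithmetic explicit (the function name is mine, just for illustration):

```python
def totals_to_110(ball_cents):
    """Check whether a candidate ball price satisfies both constraints."""
    bat_cents = ball_cents + 100          # the bat costs $1.00 more than the ball
    return ball_cents + bat_cents == 110  # together they cost $1.10

print(totals_to_110(10))  # the intuitive answer -> False (totals $1.20)
print(totals_to_110(5))   # the reflective answer -> True
```

With a 10-cent ball, the bat would cost $1.10 and the pair $1.20; solving 2x + 100 = 110 gives x = 5.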
Alter and friends split the participants into two conditions: some received the questions in an easy-to-read font, others in a hard-to-read font. And as the HBR article suggests, those given the disfluent font answered more questions correctly. 65% of participants in the disfluent condition got all questions correct, compared to only 10% in the fluent condition.
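Numbers that lopsided imply an enormous effect. As a back-of-the-envelope check (the per-condition sample size of 20 is my assumption, purely for illustration, not a figure from the paper), a one-sided Fisher exact test on hypothetical counts of 13/20 vs 2/20 all-correct looks like this:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    probability of a or more 'successes' in row 1, margins fixed."""
    row1, row2 = a + b, c + d
    col1 = a + c                       # total all-correct across both groups
    denom = comb(row1 + row2, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        if 0 <= col1 - k <= row2:      # remaining successes must fit row 2
            p += comb(row1, k) * comb(row2, col1 - k) / denom
    return p

# Hypothetical counts: 20 per condition, 65% vs 10% solving all items
p = fisher_one_sided(13, 7, 2, 18)
print(f"one-sided p = {p:.4f}")
```

Under those assumed counts the difference is wildly significant, which is exactly why such a dramatic result invites scrutiny from replication attempts.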
I’ve been beating this drum for almost 10 years now. And sadly it still needs beating.
Footnotes
This wasn’t the paper linked from the HBR article. The link is to a review of disfluency by Alter and Daniel M. Oppenheimer. Within that review, the claim about hard-to-read fonts comes from the Alter et al. paper I have just discussed.↩︎