User Experience Research and Strength of Evidence

User Experience research is about observing what people do. It’s not about canvassing people’s opinions, because, as data, opinions are worthless. For every 10 people who like your design, 10 others will hate it, and 10 more won’t care one way or the other. Opinions are not evidence.
Behaviours, on the other hand, are evidence. This is why a detective would much rather catch someone ‘red-handed’ in the act of committing a crime than depend on hearsay and supposition. Hence the often-repeated advice: “Pay attention to what people do, not to what they say.” It’s almost become a UX cliché, but it’s a good starting point for a discussion about something important: strength of evidence.

In the analysis of almost every user research session I've seen, at least one person has said: "oh, [the user] really did/didn't like that". If I were being charitable I'd say this is a problem of terminology - people use 'like' when they mean 'positive experience' - but more often, I think, it's a fundamental misunderstanding of why we're doing research at all.

Based on Hodgson's post, it sounds like that misunderstanding is a common one:

Some years ago while working for a large corporation I was preparing a usability test when the project manager called and asked me to send over the list of usability questions.
“There are no questions in usability,” I replied.
“What do you mean?” she asked, “How can there be no questions? How are you going to find out if people like our new design?”
“But I’m not trying to find out if they like it,” I pointed out in a manner that, in hindsight, seems unnecessarily stroppy, “I’m trying to find out if they can use it. I have a list of tasks, not a list of questions.”