In a previous post, I looked at trends in critic and audience ratings of movies. One finding was that critics, on average, rate movies lower than audiences. One potential explanation is that audience ratings are subject to more selection bias: only people who are likely to enjoy a film will tend to see (and therefore rate) it. In other words, if you asked an audience member and a critic how much enjoyment they got out of a film they each rated a 5, you would get similar responses.
However, an alternative explanation is that when I compare audience and critic responses, I'm comparing apples and oranges. Maybe critics, having made a career of rating things, actually have a better-calibrated rating scale. As a population, they use a wider range of values. Under this explanation, a 5 from an audience (around the lowest I saw for any movie) is actually more like a 2 from a critic.
Which interpretation is correct? I originally went with the 'they use the same scale' interpretation, partly because I think it holds merit and partly because it leads to numbers that are easier to digest. A 2-point difference on a 10-point scale is meaningful to most people in a way that 1.5 standard deviations is not. Still, it's interesting to see what happens if we use normalized values.
I re-ran all of the same code, but with normalized (z-scored) values. For the non-stats-savvy: I subtracted each group's mean from its ratings and divided by its standard deviation, so the range of the audience and critic ratings is about the same, and an 'average' rating is a 0. This is easiest to see in the first figure (compare to the previous post).
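The normalization step can be sketched like this. This is a minimal illustration with made-up data, not the post's actual dataset; the column names and values are assumptions.

```python
import pandas as pd

# Hypothetical ratings table -- the column names and values here are
# illustrative, not the data from the original analysis.
ratings = pd.DataFrame({
    "title": ["Movie A", "Movie B", "Movie C", "Movie D"],
    "critic": [4.0, 6.5, 8.0, 9.0],
    "audience": [6.0, 7.5, 8.5, 9.5],
})

# z-score each column: subtract its mean and divide by its standard
# deviation, so both groups end up on a common scale where an
# 'average' rating is 0 and the spread is about 1.
for col in ["critic", "audience"]:
    ratings[col + "_z"] = (ratings[col] - ratings[col].mean()) / ratings[col].std()
```

After this, a critic z-score and an audience z-score are directly comparable: each says how far a rating sits from that group's own average, in units of that group's spread.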
Visualizing Normalized Movie Ratings