When Star Trek hit theaters last month, I wrote that the film was getting “pretty good, though not great” reviews across the board. One of our critics dashed off an e-mail asking, “Do you and I have a different definition of 90-plus percent at Rotten Tomatoes?!”
Well, yes, we did. All Rotten Tomatoes tells us is that 90-plus percent of critics liked the movie – not how much they liked it. Once some analysis was done on the actual ratings at RT, the consensus was much closer to 3-star reviews than 4 – so yeah, pretty good, though not great.
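To see how that gap opens up, here’s a minimal sketch in Python. The star ratings and the 2.5-star “fresh” cutoff below are invented for illustration (Rotten Tomatoes classifies each review as fresh or rotten individually, not by one numeric threshold), but the arithmetic works the same way: nine reviewers out of ten can clear the bar while the average sits right around 3 stars.

```python
# A minimal sketch of the distinction, using made-up star ratings on a
# 0-4 scale. The 2.5-star "fresh" cutoff is hypothetical; Rotten
# Tomatoes actually judges each review as fresh or rotten on its own
# terms, but the arithmetic is the same.

ratings = [3.0, 3.0, 2.5, 3.5, 3.0, 3.0, 3.0, 3.0, 3.5, 1.5]

FRESH_CUTOFF = 2.5  # a review at or above this counts as "fresh"

# Percent positive: counts how many reviews clear the bar, nothing more.
fresh_pct = 100 * sum(r >= FRESH_CUTOFF for r in ratings) / len(ratings)

# Average rating: captures how enthusiastic the reviews actually are.
avg_stars = sum(ratings) / len(ratings)

print(f"Fresh: {fresh_pct:.0f}%")         # Fresh: 90%
print(f"Average: {avg_stars:.1f} stars")  # Average: 2.9 stars
```

Same ten reviews, two very different headlines: “90% fresh” and “about 3 stars.”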
Statistics don’t tell the whole story, and the websites in the business of compiling a consensus of movie reviews – Rotten Tomatoes, Metacritic, and the new Movie Review Intelligence – all use different systems for determining what the numbers really mean.
A fascinating story that ran in The Chicago Tribune this week explores these “review aggregators” in depth, shedding more light on their methodology – and why major studios covet a “fresh” rating from the consensus. (Studio execs have even called RT urging the site to reconsider certain reviews and flip them from “rotten” to “fresh.”)
Here’s the sentence that stood out most to me: “But as rivals Metacritic and Movie Review Intelligence point out, Rotten Tomatoes can give its coveted ‘fresh’ rating to films that any number (and hypothetically all) of its counted reviewers don’t really love. And though all three sites present numerical averages in their ratings, the calculations involve subjective scoring by the aggregators themselves, not just the critics.”
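For a sense of what that aggregator-side scoring can look like, here’s a hypothetical weighted average in the spirit of Metacritic, which converts each review to a 0-100 score and weights some critics more heavily than others. The scores and weights below are invented; the real assignments are the site’s editorial call.

```python
# Hypothetical Metacritic-style calculation: the aggregator assigns
# each review a 0-100 score (a judgment call when the critic gave no
# number) and a weight reflecting the critic's stature (also the
# aggregator's call), then takes the weighted mean.

reviews = [
    (80, 1.5),  # (assigned score, assigned weight), all invented
    (70, 1.0),
    (60, 1.0),
    (90, 0.5),
]

metascore = sum(score * w for score, w in reviews) / sum(w for _, w in reviews)
print(f"Metascore: {metascore:.0f}")  # Metascore: 74
```

Nudge a weight or a converted score and the headline number moves – which is the quote’s point: the “objective” average rests on subjective inputs.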
It’s an interesting read. And while you’re checking out Rotten Tomatoes, be sure to stop by the CT Movies area.