How To Refashion Your Flickchart Rankings As Ratings
When Jeremy and I came up with the concept for Flickchart, one of the biggest catalysts was that we felt ratings were never quite good enough to truly describe how much we liked a film with a numerical value. With only 4 or 5 stars, there just wasn’t enough nuance to differentiate one movie from another. But what about 1-100 scales? Or 10 stars? It was just too arbitrary to pick a value in the middle somewhere and say concretely that we liked a film exactly 7.7 out of 10, or 64 out of 100. What could we actually point to within ourselves to say that one movie was a 65 out of 100 instead of a 64? It just seems too hard to quantify a single movie’s merits on their own and come up with a value that represents one’s honest opinion. The rest, as they say, is history…
That being said, we know there are a lot of people who are justifiably more used to the idea of rating movies instead of ranking them. It’s no wonder, as we’ve reportedly been doing it since July 31, 1928, when New York Daily News critic Irene Thirer (PDF link) awarded “The Port of Missing Girls” one star. (Read more about the origins of star ratings at The Critical Numbers and Let’s Rate the Ranking Systems of Film Reviews – both from Carl Bialik, aka “The Numbers Guy”, of The Wall Street Journal.)
So if you’ve come from a long lineage of rating movies with stars (and let’s face it – we all have), here’s a little exercise to explain how you might reorient your Flickchart rankings and extrapolate them back to just simple ratings – using a few films from my list as examples.
To start things off, let’s take one of my favorite sequels: Back To The Future Part II. IMDb uses stars – ten stars, in fact – to let its users rate movies. Robert Zemeckis’s sequel currently has a moderately strong showing with a global rating of 7.5 stars on IMDb. That means it’s a 75. A C, if that were your grade on a school exam. Or on Roger Ebert’s scale of 1 to 4 stars: a solid 3 stars. I have this film pretty high on my list – currently my #24 – and I have 1015 movies that I’ve seen and ranked on Flickchart.
So to find my rating for the movie, I can simply do a little math: 24 / 1015 gives me 0.0236453201970443. This rounds to 0.02, or 2% – the top 2% of my favorite films. That means I would give this film a 98. An A. 4 stars.
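The arithmetic above can be sketched as a small function (a hypothetical helper of my own, not anything built into Flickchart):

```python
def rank_to_rating(rank: int, total: int) -> int:
    """Convert a Flickchart rank into a 0-100 rating.

    rank / total is the fraction of the list at or above the film;
    subtracting from 1 turns "top 2%" into a rating of 98.
    """
    return round((1 - rank / total) * 100)

print(rank_to_rating(24, 1015))  # → 98
```

The same function works for any entry on your list: plug in the rank and the total number of films you’ve ranked.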
Now here’s the rub: few people would actually pick Back To The Future Part II over the original Back To The Future (you know who you are!), so how does this calculate? Back To The Future is currently my #5. So 5 / 1015 equals 0.0049261083743842 – the top 0.5%. Essentially a 100. An A+. 4 stars. Oh wait… how can both of these films be 4 stars? Exactly…
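To see the collision concretely, here’s a quick sketch that buckets a rank’s percentile onto a 4-star scale (a simple proportional rounding of my own choosing, not any official conversion):

```python
def rank_to_stars(rank: int, total: int, max_stars: int = 4) -> int:
    # Scale the film's percentile onto a star scale, then round.
    return round((1 - rank / total) * max_stars)

# Two different ranks, one indistinguishable star rating:
print(rank_to_stars(24, 1015))  # Back To The Future Part II → 4
print(rank_to_stars(5, 1015))   # Back To The Future → 4
```

Nineteen list positions apart, and the star scale can’t tell them apart.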
Let’s take another time-travel example: Primer. Primer is a nice, hard-to-follow, low-budget, indie, psychological sci-fi yarn. It comes in for me currently at my #350. Over at Rotten Tomatoes, this film has a 72% Fresh rating on the Tomatometer. This was derived from 113 different critical reviews; 81 positive and 32 negative.
If I do the numbers for this one, I get 350 / 1015. This equals 0.3448275862068966, or 0.34. A 66%. A D, if I were getting the grade… Now, when you look at the film independently on my list and see that it’s my #350 of all time, would you expect me to give it a D? A 66% sounds bad, right? But – I like the movie. If I were rating it individually, I don’t know that I could give it a score of 66, but I trust that its placement is fairly accurate: there are 349 movies that exceed its virtues, so its position as my #350 is sound. It found its more honest rating by the nature of the films it has come up against. Comparing Primer to other films keeps me from being clouded in the moment and saying, “Yeah, I really liked that movie… It’s an A. 4 stars. A 90%.” Because it’s not. Not really. My enjoyment of the film is such that I do in fact think it’s good – just not as good as 349 other movies I also think are good.
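The Primer math, carried through to a letter grade, looks like this (the grade cutoffs are a common US scale I’m assuming here; grading scales vary):

```python
def rating_to_grade(rating: float) -> str:
    # Common US letter-grade cutoffs (an assumption; scales vary).
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if rating >= cutoff:
            return grade
    return "F"

rating = round((1 - 350 / 1015) * 100)  # Primer at my #350 of 1015
print(rating, rating_to_grade(rating))  # → 66 D
```

A D on paper, for a film I genuinely like – which is exactly the disconnect the comparison-based list avoids.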
Hopefully these examples show that ratings are often superfluous and subject to frivolous, emotional uncertainty. It’s difficult to point to star ratings, or percentage ratings, and say they can ever be truly accurate, because of these exact circumstances that arise when making judgments about any given film. This is why film criticism written out in words will always outweigh the value of a number tied to a movie. Our hope is that by using comparisons between films (which you do subconsciously when thinking about any movie’s qualities) you can achieve a similar outcome for how you really feel about particular movies. Looking at two films next to one another brings to mind so many things about each; many of the same things you might express if you were to write out your feelings about them. In a world of stars, we hope to bring you back down to Earth.