I generally look at the score, and if it's received a solid enough rating (7.3+) and I believe there's upside beyond that (based on the director, cast, certain reviews, genre, and my expectations), I'll rent it.
The list of movies is solid. Even Kicky should admit that. The actual rankings, though, are pretty weak.
Here's an example of how the math works and why the recency bias is structural for the IMDB Top 250. I'm giving it its own post because I had to compile the information independently.
The formula is listed as:
weighted rating (WR) = (v ÷ (v+m)) × R + (m ÷ (v+m)) × C where:
R = average for the movie (mean) = (Rating)
v = number of votes for the movie = (votes)
m = minimum votes required to be listed in the Top 250 (currently 3000)
C = the mean vote across the whole report (currently 6.9)
Let's compare two movies that have the exact same average rating. In this instance we'll say it's 8.0. An older movie might have 20,000 votes (an actual representative number for several). A newer movie, like The Dark Knight, might have 400,000.
Let's plug those into the formula for their weighted averages for IMDB 250 purposes.
Old movie: (20,000 ÷ (20,000+3,000)) × 8.0 + (3,000 ÷ (20,000+3,000)) × 6.9 = roughly 7.86
New movie: (400,000 ÷ (400,000+3,000)) × 8.0 + (3,000 ÷ (400,000+3,000)) × 6.9 = almost exactly 8.0
That creates a built-in spread of about 0.14 weighted points for films that get more votes, which are largely newer films. That matters a LOT given that the spread between #141 and #250 is 0.2 points.
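If you want to check the arithmetic yourself, here's a quick sketch in Python (the 3,000-vote minimum and 6.9 site mean are just the current values quoted above; plug in whatever IMDB is using when you read this):

```python
def weighted_rating(R, v, m=3000, C=6.9):
    """Bayesian weighted rating per the IMDB formula above.

    R: the movie's raw mean rating
    v: number of votes for the movie
    m: minimum votes required to be listed in the Top 250
    C: mean vote across the whole report
    """
    return (v / (v + m)) * R + (m / (v + m)) * C

# Two films with identical 8.0 averages, differing only in vote count.
print(weighted_rating(8.0, 20_000))   # old movie:  ~7.857
print(weighted_rating(8.0, 400_000))  # new movie:  ~7.992
```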
The porn chick from this movie had a pretty big cameo in Entourage recently.
Makes sense though. The more a movie has been seen, the more reliable its rating, and so the higher it gets weighted.
An 8 with 500,000 votes is definitely worth more than an 8 with 50,000.
Except for the fact that it's a tremendous structural disadvantage for an entire class of films. And the difference actually gets magnified the farther the average rating sits from 6.9.
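To put numbers on that magnification (my own arithmetic, reusing the 20,000 and 400,000 vote counts from above): rerun the formula with a shared average of 9.0 instead of 8.0, and the old movie comes out around 8.73 while the new one comes out around 8.98. That's a gap of about 0.26, nearly double the 0.14 gap at 8.0.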
The goal is to rate the BEST 250 films period, not the best 250 films of the last 30 years. The IMDB top 250 list comes a lot closer to doing the latter than doing the former, and frankly it sucks at the latter too. It is a measure of popularity over the last ten years only. Nothing more. It is barely more useful for discerning quality than looking at box office receipts.
I've asked this before and I'll ask it again: what distinguishes overall popularity from inferred quality?
I'll put it this way. An entertainment center sold at Wal-Mart, made by O'Sullivan, is probably 1000x more popular than all the entertainment centers ever sold by Ethan Allen.
One is popular; the other is quality.