Thread:Talk:Article feedback/More accurate rankings of ratings should consider number of reviewers

I like the rating feature and I think it's well implemented. I knew this information must be aggregated somewhere, and I was happy to finally find the Article feedback dashboard. What I found was surprising.

At the moment I looked, the Forbes list of billionaires (2012) page was ranked number 1, and the Plastic page was ranked number 2. Not that individual rankings matter so much, because it's not a contest. But I think it is of tremendous value to see examples of what the community considers to be among the most valuable content on Wikipedia. And I think the Forbes list, while not a bad page, does not rise to that level. So I looked at the number of reviewers.

The Forbes article had about 20 reviewers at the time. The Plastic article had almost 500. I believe that a 4.85 with 500 reviewers definitely beats a 4.86 with 20 reviewers. The question is, what is the right way to factor the reviewer count into the ranking algorithm? I'll look into whether there's a standard way to do this in the world of statistics, but maybe somebody here knows. Either way, I think it's a conversation worth having. I took a quick look through this Talk page to see if this thread has already been started, and I apologize if I missed it. I didn't see one.
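For what it's worth, one common approach in statistics is a Bayesian average: shrink each article's raw mean rating toward a prior mean, with the amount of shrinkage decreasing as the number of reviewers grows. Here is a minimal sketch; the prior mean and prior weight below are illustrative assumptions, not values used by the Article feedback tool, and the function name is my own.

```python
def bayesian_average(rating, n_reviews, prior_mean=4.0, prior_weight=50):
    """Weighted blend of an article's mean rating and a prior mean.

    With few reviews the result stays close to prior_mean; as
    n_reviews grows, it converges to the raw rating.
    """
    return (n_reviews * rating + prior_weight * prior_mean) / (n_reviews + prior_weight)

# The figures from this post: ~20 reviewers at 4.86 vs. ~500 at 4.85.
forbes = bayesian_average(4.86, 20)    # pulled strongly toward the prior
plastic = bayesian_average(4.85, 500)  # barely moved by the prior
assert plastic > forbes  # the heavily reviewed article now ranks higher
```

The choice of prior weight (here 50) controls how many reviews an article needs before its own ratings dominate; that would be a tuning decision for whoever maintains the dashboard.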