At the IEEE Conference on Computational Intelligence in Games, we decide on our annual best paper award through reviewer nomination followed by attendee vote. Concretely, this means that we select all papers with an exceptionally high average review score, plus all papers with a high score where several reviewers have independently indicated that the paper should be nominated for a best paper award. The nominated papers (for CIG 2013, there are 9 of them) are presented in one or two plenary sessions, and at the end of the presentations the conference attendees vote for the best paper by secret ballot. The paper with the most votes wins.
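The nomination rule described above can be sketched as a simple filter. This is only an illustration: the actual cutoff values and the minimum number of reviewer nominations used at CIG 2013 are not stated in the post, so the thresholds below are made-up assumptions.

```python
# Illustrative sketch of the nomination rule; all threshold values are
# hypothetical assumptions, not the real CIG 2013 parameters.

EXCEPTIONAL_CUTOFF = 9.0   # average score that nominates a paper outright
HIGH_CUTOFF = 8.0          # what counts as a "high" average score
MIN_NOMINATIONS = 2        # "several reviewers" nominating independently

def nominated(avg_score, reviewer_nominations):
    """Return True if a paper qualifies for the best paper session."""
    if avg_score >= EXCEPTIONAL_CUTOFF:
        return True
    return avg_score >= HIGH_CUTOFF and reviewer_nominations >= MIN_NOMINATIONS

# (score, number of reviewer nominations) per paper
papers = {
    "A": (9.2, 0),  # exceptional average score: nominated outright
    "B": (8.3, 3),  # high score plus several nominations: nominated
    "C": (8.5, 1),  # high score but too few nominations: not nominated
}
shortlist = [name for name, (s, n) in papers.items() if nominated(s, n)]
```

The winner is then simply the nominee with the most attendee votes at the plenary session.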
To paraphrase someone famous, I think that while this system for deciding a best paper award might not be perfect, it is better than all the other systems that have been tried. However, we recently received an email questioning this policy and suggesting that we instead have a special awards committee select the best paper. I wrote a longish answer, which I'm reproducing below (slightly edited):
While some conferences decide awards in a small committee, other conferences (including some reputable ones like GECCO, Foundations of Digital Games and EuroGP) decide on their awards like us, through reviewer nomination and votes among conference attendees. There are several reasons why one might want to do it this way:
* It makes the award more legitimate and avoids many potential conflicts of interest. If the award is decided by a small committee, which is either secret or otherwise conducts its work in secret, suspicions can always arise about why the committee decided as it did. This is especially true in a relatively small community where the conference organisers know many of the attendees personally. (It is worth noting that this year the chairs of CIG have not selected any papers directly at all; the 9 nominees are selected purely based on a cutoff in terms of reviewer scores and number of best paper nominations.)
* It ensures that a larger number of people with different expertise get to weigh in on the award. This could be seen as ensuring quality through the "wisdom of the crowds", or simply as ensuring that the award is decided by a set of experts whose fields of competence and interest reflect those of the conference attendees. In a small committee, some fields of expertise are bound to be missing.
* It engages the audience. Attendees are more likely to pay close attention in a session where every attendee is expected to provide feedback, especially if this feedback has real impact.
* It incentivises good presentations. Twenty minutes is enough to present the core ideas of any conference paper. However, many researchers do not put sufficient effort into preparing and revising their presentations, and as a result conferences are often filled with poor ones. Knowing that winning the best paper award depends partly on how well you bring your message across tends to have a great effect on presentation quality.
As a personal anecdote, earlier this year I attended the best paper session at EuroGP in Vienna. The winner was a paper that was complex, unintuitive and challenged core notions of how genetic programming works. The presenter had gone to great lengths to prepare a presentation that most of the audience actually understood - and walked off with a very well-deserved award. To me, that presentation was worth as much as the rest of the conference combined.