Computer science differs from most other academic fields in that conference papers are counted as real, citable publications. While journals are generally seen as more important, it is perfectly possible to get a tenured faculty position without ever publishing a journal paper.
This is mainly a good thing. The short turnaround from initial submission to publication (compared to traditional journals) ensures that research gets published in a timely manner. Deadlines make sure that people get their act together and submit. Not all computer science papers are super-polished, but who cares? It's more important that people get their ideas and results out there for others to build on.
However, it has also had the result that many computer science conferences are hard to get into. In computational intelligence it's pretty common to accept about half of the submitted papers. Other fields are much harsher. In American academia, it is common to require that a conference accepts at most 30% of submitted papers in order to be counted as "selective". Many conferences are even stricter than that, with acceptance rates in the 10% range.
Why are acceptance rates so low? The often stated reason is that conference attendees don't have enough time to see lots of bad research being presented, and therefore only the best papers should be presented. However, this assumes that all attendees see all talks. If there are many talks (or posters) in parallel at a conference, people can choose which talk/poster they want to see. This is after all how it works in fields such as medicine and physics, where conference presentations are based on abstracts, and all or most abstracts are accepted.
The real reason is that conferences want to be prestigious through being exclusive. If a conference only accepts 13% of submitted papers, getting in is seen as an accomplishment for the author, something to put in your CV to impress various committees with. Paper acceptance becomes instant gratification for the scientist. For conferences, being an exclusive conference means that you get more submissions and more attendees, and as every organisation wants to perpetuate itself there is an incentive for conferences to be exclusive.
So what is the problem with this? There are several problems. To begin with, peer review is notoriously noisy. Most conferences have three or four reviewers per paper, and it is very common for them to disagree with each other. One reviewer might think a paper is excellent, another thinks it is boring and derivative, a third thinks it is off-topic, and a fourth does not understand the paper and therefore thinks it is badly written and/or wrong. How do you make a decision based on this? Being the program chair of a conference means making decisions based on conflicting reviewer judgements, knowing that your decisions will often be wrong. If you have a low acceptance rate, it is extremely probable that you will reject papers with one or two negative reviews, and therefore reject many good papers.
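A minimal Monte Carlo sketch (in Python, with entirely made-up numbers for paper quality and reviewer noise; none of this is based on real conference data) illustrates the point:

```python
import random
import statistics

random.seed(1)
N_PAPERS, N_REVIEWERS, ACCEPT_RATE = 10_000, 3, 0.15

papers = []
for _ in range(N_PAPERS):
    quality = random.gauss(0, 1)                                  # latent "true" quality
    reviews = [quality + random.gauss(0, 1) for _ in range(N_REVIEWERS)]  # noisy reviews
    papers.append((quality, statistics.mean(reviews)))

# Accept the top 15% by mean review score.
cutoff_index = int((1 - ACCEPT_RATE) * N_PAPERS)
score_cutoff = sorted(p[1] for p in papers)[cutoff_index]
quality_cutoff = sorted(p[0] for p in papers)[cutoff_index]

# How many of the genuinely best 15% of papers actually got in?
truly_good = [p for p in papers if p[0] >= quality_cutoff]
accepted_good = sum(1 for p in truly_good if p[1] >= score_cutoff)
print(f"truly top-15% papers that were accepted: {accepted_good / len(truly_good):.0%}")
```

With these assumed noise levels a substantial fraction of the genuinely best papers miss the cut; the exact number depends entirely on the assumptions, but the qualitative point stands: the lower the acceptance rate, the more the decisions are driven by reviewer noise rather than by paper quality.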
Rejecting good papers is bad, because you are holding up science. Good papers, whose research should be out there for the world to see, don't get published. Authors don't get feedback, and get dissuaded from doing further research.
Why are reviewer opinions so conflicting? Part of the reason is certainly that there are few incentives to do a good job when reviewing papers, or even to review papers at all, so why bother? But more fundamentally, it is not always possible to tell the good papers from the bad ones. It is often as hard to spot greatness as it is to spot errors and misconduct. Many groundbreaking papers initially got rejected. There are probably many other groundbreaking results that the world doesn't know about, because the papers were never published.
If conferences are selective, reviewers will become nasty. Reviewers in many parts of computer science are notoriously more negative than in other sciences. This is probably because they might themselves submit to the same conference, or have submitted in the past and been rejected, and they don't want the paper they are reviewing to be treated better than their own paper was or will be. This breeds a culture of nastiness.
People respond to incentives. With selective conferences, researchers will start writing papers to maximise the likelihood of acceptance, and start doing research that can be written up in such papers. This is a disastrous consequence, because the easiest way to get into a selective conference is to write a paper which makes a small incremental advance, and which is not wrong in any way. The easiest part of a reviewer's job (certainly so for a nasty reviewer) is to find faults in the paper under review. I personally feel I have done a good job as a reviewer when I've found many faults in the paper I'm reviewing. Papers that make drastic claims or bold hypotheses are the easiest to shoot down. It is much harder to reject a paper because it is "not exciting". Thus selective conferences could come to enforce the Law of Jante, even though that is nobody's intention.
I suspect that this effect is even greater for people entering some sub-field of computer science without being guided by someone who is already an insider. A newcomer to the field does not know which buzzwords to use, which important people to cite and which theories to subscribe to, making it very easy for reviewers to shoot down an uncomfortable paper.
To sum this up, selective conferences are bad for science because good papers get rejected, because they perpetuate the myth that we can accurately judge the quality of a paper before it is even published, because they breed a culture of nastiness, and because they can reward mediocre research rather than ground-breaking research.
I see no reason why we should go on playing this game. Instead, we should have inclusive conferences that accept all papers that are good enough. This could be 10%, 50% or even 100% of them. Good enough could be defined as being on topic, substantially correct, intelligibly written and making some sort of contribution. This would of course mean that we can no longer judge a researcher based on what conferences he or she gets papers accepted in.
So, if we can't judge papers based on what conference they have been published in, how should we judge them? Well, there are two ways. The first is to actually read them. Shocking as this suggestion might seem, reading a paper is the only way to really know its value. Of course, it requires that you know the research field well enough to understand the paper, and that you are prepared to spend the time it takes to read it.
The other way is to wait a few years and see if the paper influences other people's research, and therefore gets cited. In addition to citation count, we could have some sort of voting mechanism, where attendees of a conference or members of a research community get to vote on the most important papers of conferences held three or five years earlier. The problem with this is of course that you have to wait a few years.
But there is not really any way around it, if you don't want to or can't read the papers. Research takes time, and hiring or tenure decisions should not be based on the sort of incomplete information you can get from low-quality metrics such as in which venue a paper was published.
Tuesday, December 17, 2013
Thursday, July 18, 2013
On best paper award selection through reviewer nomination followed by attendee vote
At the IEEE Conference on Computational Intelligence and Games, we decide on our annual best paper award through reviewer nomination followed by attendee vote. Concretely, this means that we select all papers with an exceptionally high average review score, and all papers with a high score where several reviewers have separately indicated that the paper should be nominated for a best paper award. The nominated papers (for CIG 2013, there are 9 nominated papers) are presented in one or two plenary sessions, and the conference attendees are asked to vote for the best paper award at the end of the presentations by way of secret ballot. The paper with the most votes wins.
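To make the nomination rule concrete, here is a small sketch in Python. The thresholds (VERY_HIGH_AVG, HIGH_AVG, MIN_NOMINATIONS) are illustrative placeholders, not the actual cutoffs used at CIG:

```python
from dataclasses import dataclass

@dataclass
class Review:
    score: float       # reviewer's overall score, e.g. on a 1-10 scale
    nominated: bool    # reviewer ticked "nominate for best paper"

VERY_HIGH_AVG = 8.5    # "exceptionally high average review score" (illustrative)
HIGH_AVG = 7.5         # "high score" threshold (illustrative)
MIN_NOMINATIONS = 2    # "several reviewers have separately indicated..." (illustrative)

def nominate(reviews: list[Review]) -> bool:
    avg = sum(r.score for r in reviews) / len(reviews)
    nominations = sum(r.nominated for r in reviews)
    return avg >= VERY_HIGH_AVG or (avg >= HIGH_AVG and nominations >= MIN_NOMINATIONS)

# Example: a strong paper that two reviewers flagged for the award gets nominated.
print(nominate([Review(8.0, True), Review(7.5, True), Review(7.8, False)]))  # True
```

The attendee vote then decides among the papers this rule nominates; the organisers only fix the cutoffs in advance.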
To paraphrase someone famous, I think that while this system for deciding a best paper award might not be perfect, it is better than all the other systems that have been tried. However, we recently received a mail questioning this policy and suggesting that we instead have a special awards committee select the best paper. I wrote a longish answer, which I'm reproducing below (slightly edited):
While some conferences decide awards in a small committee, other conferences (including some reputable ones like Gecco, Foundations of Digital Games and EuroGP) decide on their awards like us, through reviewer nomination and votes among conference attendees. There are several reasons why one might want to do it this way:
* It makes the award more legitimate and avoids many potential conflicts of interest. If the award is decided by a small committee, which is either anonymous or otherwise conducts its work in secret, suspicions can always arise about why the committee decided as it did. This is especially true for a relatively small community where the conference organisers know many of the attendees personally. (It is worth noting that the chairs of CIG have not selected any papers directly this year; the 9 nominees were selected purely based on a cutoff in terms of reviewer scores and number of best paper nominations.)
* It ensures that a larger number of people with different expertise get to weigh in on the award. This could be seen as ensuring quality through the "wisdom of the crowds", or simply as ensuring that the decision is made by a set of experts whose fields of competence and interest reflect those of the conference attendees. In a small committee, some fields of expertise are bound to be missing.
* It engages the audience. Attendees are more likely to pay close attention in a session where every attendee is expected to provide feedback, especially if this feedback has real impact.
* It incentivises good presentations. Twenty minutes is enough to present the core ideas of any conference paper. However, many researchers do not put sufficient effort into preparing and revising their presentations, and as a result conferences are often filled with poor presentations. Knowing that winning the best paper award depends partly on how well you bring your message across tends to have a great effect on presentation quality.
As a personal anecdote, earlier this year I attended the best paper session at EuroGP in Vienna. The winner was a paper that was complex, unintuitive and challenged core notions of how genetic programming works. The presenter had gone to great lengths to prepare a presentation that most of the audience actually understood - and walked off with a very well-deserved award. To me, that presentation was worth as much as the rest of the conference put together.
Thursday, January 10, 2013
CfP: PCG workshop 2013
Call for Papers
The fourth workshop on Procedural Content Generation in Games (PCG 2013)
Organized in conjunction with the International Conference on Foundations of Digital Games (FDG 2013)
Important Dates
Full paper submission: March 4
Decision notification: March 25
Camera-ready deadline: April 1
Workshop held: between May 14 and 17
Website: http://pcg.fdg2013.org/
Procedural content generation (PCG) in games, a field of growing popularity, offers hope for substantially reducing the authoring burden in games, improving our theoretical understanding of game design, and enabling entirely new kinds of games and playable experiences. The goal of this workshop is to advance knowledge in PCG by bringing together researchers and fostering discussion about the current state of the field. We invite contributions on all aspects of generating game content, using any method. Descriptions of new algorithms, theoretical or critical analyses, and empirical studies of implementations and applications are all welcome.
We solicit submissions as either full papers about results from novel research (8 pages) or short papers describing works in progress (4 pages). Papers may be about a variety of topics within procedural content generation, including but not limited to:
Offline or realtime procedural generation of levels, stories, quests, terrain, environments, and other game content
Case studies of industrial application of procedural generation
Issues in the construction of mixed-mode systems with both human and procedurally generated content
Adaptive games using procedural content generation
Procedural generation of game rulesets (computer or tabletop)
Techniques for procedural animation
Issues in combining multiple procedural content generation techniques for larger systems
Procedural content generation in non-digital games
Procedural content generation as a game mechanic
Automatic game balancing through generated content
Techniques for games that evolve and/or discover new game variants
Player and/or designer experience in procedural content generation
Procedural content generation during development (e.g. prototyping, playtesting, etc.)
Theoretical implications of procedural content generation
How to incorporate procedural generation meaningfully into game design
Lessons from historical examples of procedural content generation (including post-mortems)
Authors are especially encouraged to submit work that has the potential to be adopted by the digital games industry.
Organizers
Alex Pantaleev, SUNY Oswego
Gillian Smith, Northeastern University
Joris Dormans, Amsterdam University of Applied Sciences
Antonio Coelho, Universidade do Porto