As the end of the pandemic draws near, one of the many things I am excited about is being able to go to physical conferences again. A year of virtual conferences has shown us that videoconferencing is in no way a viable replacement for a real conference; at best it's a complement. I am extremely excited to go and meet my friends and colleagues from all over the world and exchange ideas and experiences, but I am perhaps even more excited to be able to introduce a new generation of PhD students to their academic community, see them make friends and brainstorm the ideas that will fuel the next wave of scientific advances. It is mainly for their sake that I hope some in-person events may happen already this year; it's heartbreaking to see a generation of junior researchers deprived of their opportunities for networking and professional and social growth any longer.
However, I'm only looking forward to going to the smaller, specialized conferences. In my field (AI and Games), that would be such conferences as FDG, IEEE CoG, and AIIDE. I am not really looking forward to the large, "prestigious" conferences such as AAAI, IJCAI, and NeurIPS. In fact, if I had to choose (and did not worry about the career prospects of my students), I would only go to the smaller gatherings.
Why? Largely because I find the big conferences boring. There's just not much there for me. In a large and diverse field such as artificial intelligence, the vast majority of paper presentations are just not relevant for any given attendee. If I drop into a paper session at random (on, say, constraint satisfaction or machine translation or game theory or something else I'm not working on), there's probably around a 20% chance I even understand what's going on, and a 10% chance I find it interesting. Sure, I might be less clever than the average AI researcher, but I seriously doubt any single attendee really cares about more than a small fraction of the sessions at a conference such as AAAI.
This could to some extent be remedied if the presentations were made to be understood by a broader audience. And I don't mean "broader audience" as in "your parents", but as in "other AI researchers". (Apologies if your parents are AI researchers. It must be rough.) However, that's not how this works. These conglomerate conferences are supposed to be the top venues for technical work in each sub-field, so presenters are mostly addressing the 3% of conference attendees who work on the same topic. Of course, it does not help that AI researchers are generally NOT GOOD at giving talks about their work, and are not incentivized to get better. The game is all about getting into these conferences, not about presenting the work once it has been accepted.
Ah yes, this brings us to the topic of acceptance rates. I have long objected to selective conferences. Basically, the top venues in various computer science domains are not only big but also accept a very small percentage of submitted papers - typically 20% or even less. This was once motivated by the constraints of the venue - there supposedly wasn't space for more presentations. While this was always a questionable excuse, the fact that conferences keep their low acceptance rates even while going virtual (!) shows beyond any shadow of a doubt that it is all about the prestige. Hiring, tenure, and promotion committees, particularly in the US, count publications in "top" conferences as a proxy for research quality.
I get the need for proxies when evaluating someone for hiring or promotion, because actually understanding someone else's research deeply, unless they're working on exactly the same thing as you, is really hard. Still, we need to stop relying on selective conference publications to judge research quality, because (1) acceptance into a selective conference does not say much about research quality, and (2) the selectiveness makes these conferences worse as conferences.

First things first: why is acceptance into a selective conference not a good signal of research quality? Those of us who have been involved in the process in different roles (author, reviewer, meta-reviewer, area chair, etc.) over a number of years have plenty of war stories about how random this process can be. Reviewers may be inexperienced, paper matching may be bad, and above all there's a mindset that we are mostly looking for reasons to reject papers. If a paper looks different or smells off, a reason will be found to reject it. (Yes, reader, I see that you are right now reminded of your own unfair rejections.) But we don't have to rely on anecdotes. There's data. Perhaps the largest study on this showed that decisions were 60% arbitrary. In the years since that experiment was run in 2014, remarkably little about the process has changed. It sometimes seems that computer scientists suffer from a kind of self-inflicted Stockholm syndrome: the system we built for ourselves sucks, but it's our system, so we will defend it.
I personally think that what is actually being selected for is partly familiarity: a paper has a better chance of getting in if it looks more or less like what you expect a paper in the field to look like. This means a certain conservatism in form, or even selection for mediocrity. Papers at large conferences are simply more boring. I usually find more interesting and inspiring papers at smaller conferences and workshops than in the corresponding topical sessions at large conferences. I don't have any data to back this up, but the fact that program chairs often urge their reviewers to accept novel and "high-risk" papers suggests that they perceive this phenomenon as well. If the most interesting papers were actually being accepted, we would not be hearing such things.
Another perspective on low acceptance rates is the following: if a competent researcher has done sound research and written it up in a readable paper, they should not have to worry about getting it published. If the research is not wrong and is a contribution of some sort, it should get published, right? It's not like we are running out of pixels to view the papers on. No one benefits from good research not being published. However, in the current state of things, even the best researchers submit work they know is good in the knowledge that there's a good chance it will not get accepted because someone somewhere disliked it or didn't get it. Pretty bizarre when you think about it. Is computer science full of masochists, or why else do we do this to ourselves? The emergence of a preprint-first practice, where papers are put on arXiv before or at the same time as they are submitted for review, has helped matters somewhat by making research more easily accessible, but it is perversely also used as an excuse for not dealing with the low acceptance rate problem in the first place.
Back to the conference itself. Ignoring that most papers are uninteresting to most attendees, maybe these large conferences are great for networking? Yes, if you already know everyone. For someone like me, who has been in AI long enough to have drunk beer with the authors of many of my favorite papers, AAAI and NeurIPS are opportunities for serial hangovers. For someone new to the community, a smaller conference, where people may actually notice you standing alone by the wall and come up and talk to you, would seem a much better opportunity to get to know people. Basically, a conference with thousands of attendees does not provide community.
So who, or what, are large conferences for? I honestly do not see a reason for their existence as they currently function. As covid has forced all conferences to go temporarily virtual, maybe we should consider only bringing back the smaller and more specialized conferences? If some imaginary Federal Trade Commission of Science decided to break up every conference with more than 500 attendees, as if it were Standard Oil or AT&T, I don't think we would miss much.
But wait. Isn't there a role for a large gathering of people where you could go to learn what happens outside your own narrow domain, absorb ideas from other subfields, and find new collaborators with diverse expertise? I think there is. Current large conferences don't really serve that function very well, because of what gets presented and how it gets presented (as stated above). So I do think there should be something like broad conferences where you could find out what's going on in all of AI. But you should not be able to submit papers to such a conference. Instead, you would need to submit your paper to a smaller, more permissive conference for your particular subfield. After the papers are presented at the smaller conference, the organizers and/or audience would choose a subset of authors of the most notable papers to go and present them at the large conference. But those presentations must explicitly target people outside the authors' technical subfield. In other words, if I were to present our new work on procedural content generation through reinforcement learning, I would have to present it so that folks working in constraint satisfaction, learning theory, and machine translation all understood it and got something out of it. And I would expect the same of their presentations. This would mean presenting in a very different way than we usually present at a conference. But it would make for a large conference I would want to go to.
1 comment:
Back in 2011, as program chair of IJCAI, I introduced the Best Paper track from (smaller) Sister Conferences precisely for this reason, so that a broader audience could keep in touch with the most exciting and groundbreaking work in the specialized areas.