Tuesday, April 22, 2014

Metablogging my research

When I started this blog in 2006, it was with the intention that I should write short blog posts explaining my own research in a more accessible way. It appears, however, that I haven't found the time to do much of that recently. (I was better at doing so in the first few years of the blog's existence - see some examples here, here and here.) The few blog posts I've written in recent years tend to be more general discussions about things such as human intelligence, conference acceptance rates or why I work so much.

While I still intend to write more blog posts explaining my research in more accessible terms, I am not saying I will get around to actually doing so anytime soon. However, some other researchers have been kind enough to write accessibly about research done in my group. So instead of a blog post about my research, this is a blog post about blog posts about my research. Here are some picks - if you have blogged about my research (or research I supervised) and want to be included in this list, tell me and I'll add it!

First we have Mike Cook at Goldsmiths, University of London. He's written several excellent posts on my team's work over at the blog for his Angelina project. For example, see this post on Antonis' work with Sentient Sketchbook, this post on the Ropossum generator for Cut the Rope levels, or this post on the Procedural Procedural Level Generator Generator.

Tommy Thompson at the University of Derby has produced a number of very well-written posts on his blog about games and AI recently, partly for the benefit of his students and partly for the general public. This post on PCG and the Mario AI Competition is a very good example.

Alex Champandard at AIGameDev.com (a very useful resource for industry-style game AI) did an interview with me a long time ago, soon after I finished my PhD. Re-reading my answers, I guess I was fairly naive back then. I probably still am.

There have also been a number of articles about my work in New Scientist, one of the more well-known and respected popular science magazines. These articles are generally written by competent and well-informed professional science journalists (unlike some of the coverage of my work in the mainstream press, which I won't link to here, as it is more likely to confuse than enlighten). Other good, short pieces (blog posts rather than articles, though it's hard to distinguish the two) from quality news outlets are the following from Kotaku, Wired and the Guardian. In addition to such secondary sources, there seem to be dozens of "tertiary" blog posts that are essentially just reformulations of some of the posts and articles I've linked to above - the Mario AI competition in particular is the focus of most of them.

I'll finish with the following post, which was written in 2007 and is a rather acidic attempt to ridicule some of the earliest work I did in my PhD. Apparently, I took this seriously enough back then to write an elaborate response post. I think I would be much more comfortable with that kind of criticism these days, now that I have established a position in the research community. However, as it's safer to criticise a PhD student than a respected researcher, nobody criticises me anymore. Which is a shame: there must be some people out there who think that what I do is completely misguided, and constructive, well-informed criticism would be very welcome. I've even tried actively seeking critical views (and, of course, promoting myself) by posting stories about my work on Slashdot (for example here, here and here). Well, it turns out you do get critical comments, just not always well-informed ones...

So perhaps the conclusion is that while promoting your own and others' work (and mine!) to other researchers, students and the general public is all great, it's not necessarily the best way of finding out what people you respect really think about your work. For that, you have to go to a conference and get everybody really drunk.

Thursday, February 06, 2014

The Curse of Increasing Marginal Work Utility, or Why I Work So Much

Academics constantly complain about being overworked, yet they also praise their freedom to decide how and when to work as one of the great advantages of their job. This seems contradictory. If they have such freedom, why are they so overworked?

Part of the answer is of course that you are never really done. You can always do better: write more and better papers, get better research results and deliver better teaching. But this is true for most white-collar jobs these days - you could always work more on your next budget, product description, market report, technical drawing or whatever it is you do. If you are good at what you do, it should be enough to work normal hours, go home and get some good rest, and return refreshed the next day to do more work. Few successful academics do this. There seems to be some extra factor that affects academics, which helps explain why they always seem stressed out - and why you very rarely see successful academics below the age of 50 working reasonable hours.

That extra factor is increasing marginal work utility. Simply put, the more you work, the more you get out of each extra work hour. This gives you a very strong incentive to work as much as possible, which is clearly not a very good idea if you also want to live your life. Let me explain.

We begin by imagining I work the statutory work week of around 40 hours. If I work efficiently, I will then have time to prepare and conduct my teaching, supervise my masters students, do my administrative duties (including answering lots of mails), review papers for some conferences and devote the minimum acceptable time to supervising my PhD students. These are all tasks that I must do - not doing them would mean that I am simply not doing my job. This amount of work is clearly not enough, as I do not get time to do any research - though I could of course try to take credit for the work of PhD students that I don't supervise enough.

If I put in some more time, I have time to do some of the research where I am not in a "driving" role. I could participate some more in what my PhD students are doing, help my masters students write up papers based on their theses (which I have supervised), do my part in joint research projects with people in other institutions, and serve on organising committees of conferences. Also, write parts of collaborative grant proposals. These are activities where I am mostly fulfilling obligations to other people, though they are also activities where I get relatively high visibility for limited effort. But none of this means doing any research of my own.

So let's say I put in even more time. I can now advertise some of the research I (and my collaborators) have been doing and help shape the directions of my own and others' research, for example by giving invited talks or writing parts of survey articles. I can start new initiatives, e.g. competitions, discussion groups or grant proposals. These are things that make a lot of sense to do and I enjoy doing them, but it is still not the same as actually doing my own research.

Finally, when I'm done with all of the above and there are no urgent mails awaiting reply in my inbox, I could sit down with my own idea, write some code of my own, run my own experiments and write a paper with myself as first author. In other words, do some research of my own. This rarely happens, as I rarely get to this point and when I do I rarely have any energy left.

The utility of that sixtieth work hour is so much higher than the tenth or twentieth because I can use it to do my own research. If I work 60 hours rather than 40, I don't get 50% more of my own research done, but almost infinitely more, as I would otherwise get almost none of my own research done. Given that I am interested in doing research of my own, there is a very strong incentive to work very long hours. It is not that I am uninterested in any of the other parts of my job - I enjoy all of them, except grant writing and meetings with management - but I am more interested in research.
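To put the same point in toy-model form: imagine that the obligations described above have to be cleared before any hour can go to my own research. Below is a minimal sketch in Python, with a made-up threshold of 55 obligation hours per week - an illustration of the shape of the curve, not a time log.

```python
OBLIGATION_HOURS = 55  # assumed for illustration; not an actual measurement

def own_research_hours(hours_worked: int) -> int:
    """Hours per week left for one's own research, after teaching, admin,
    supervision, collaborations and outreach have been cleared."""
    return max(0, hours_worked - OBLIGATION_HOURS)

for h in (40, 50, 60, 70):
    print(f"{h} hours worked -> {own_research_hours(h)} hours of own research")
# Going from 40 to 60 hours is only 50% more work, but it takes own research
# from zero to something, which is why the marginal value of those late hours
# feels so high.
```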

You could compare the job of an academic to having a teaching and administration job and having research as a hobby. Except that the "hobby" is the presumed core of the job as advertised, and the reason you applied for the job.

An interesting detail is that it wasn't always like this. Back when I was a PhD student, and to a large extent still when I was a postdoc, my marginal work utility was basically flat as I spent most of my time on my own research. But as I was good at research I got promoted to a position where I had to spend most of my time doing something else. (Incidentally, I have seen quite a few excellent researchers who are less than excellent at teaching and administration.)

Finally, let me point out that I am not complaining, just explaining. And I am certainly not blaming anyone, especially not my wonderful colleagues and students. After all, I love my job and I would not trade it for any other job. I just hope to have made the logic clear behind why I, and probably many others like me, work such long hours.

Tuesday, December 17, 2013

Against "selective" conferences

Computer science differs from most other academic fields in that conference papers are counted as real, citable publications. While journals are generally seen as more important, it is perfectly possible to get a tenured faculty position without ever publishing a journal paper.

This is mainly a good thing. The relatively short time from initial submission to publication (compared to traditional journals) makes sure that research gets published in a timely fashion. The deadlines make sure that people get their act together and get the paper submitted. Not all computer science papers are super-polished, but who cares? It's more important that people get their ideas and results out there for others to build on.

However, it has also had the result that many computer science conferences are hard to get into. In computational intelligence it's pretty common to accept about half of the submitted papers. Other fields are much harsher. In American academia, it is common to require that a conference accepts at most 30% of submitted papers in order to be counted as "selective". Many conferences are even stricter than that, with acceptance rates in the 10% range.

Why are acceptance rates so low? The often stated reason is that conference attendees don't have enough time to see lots of bad research being presented, and therefore only the best papers should be presented. However, this assumes that all attendees see all talks. If there are many talks (or posters) in parallel at a conference, people can choose which talk/poster they want to see. This is after all how it works in fields such as medicine and physics, where conference presentations are based on abstracts, and all or most abstracts are accepted.

The real reason is that conferences want to be prestigious through being exclusive. If a conference only accepts 13% of submitted papers, getting in is seen as an accomplishment for the author, something to put in your CV to impress various committees with. Paper acceptance becomes instant gratification for the scientist. For conferences, being an exclusive conference means that you get more submissions and more attendees, and as every organisation wants to perpetuate itself there is an incentive for conferences to be exclusive.

So what is the problem with this? There are several problems. To begin with, peer review is notoriously noisy. Most conferences have three or four reviewers per paper. It is very often the case that reviewers disagree with each other. One reviewer might think a paper is excellent, another thinks it is boring and derivative, a third thinks it is off-topic, and a fourth does not understand the paper and therefore thinks it's badly written and/or wrong. How do you make a decision based on this? Being the program chair of a conference means making decisions based on conflicting reviewer judgements, knowing that your decisions will often be wrong. If you have a low acceptance rate, it is extremely probable that you will reject papers with one or two negative reviews, therefore rejecting many good papers.
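The effect is easy to see with a little simulation. Below is a minimal Monte Carlo sketch (in Python), where each paper has an unobservable true quality, each review is that quality plus noise, and the conference accepts the top fraction by average score. All the numbers in it - the quality distribution, the amount of reviewer noise, three reviews per paper - are assumptions chosen for illustration rather than estimates from any real conference, but they show how a stricter cutoff turns review noise into rejected good papers.

```python
import random

def simulate(n_papers=1000, n_reviews=3, noise_sd=1.5, accept_rate=0.15, seed=0):
    """Toy model: papers have an unobservable true quality; each review is
    that quality plus Gaussian noise; the conference accepts the top
    accept_rate fraction by average review score. Returns the fraction of
    the truly best papers (same fraction, ranked by true quality) that end
    up rejected. All parameters are assumptions, not measurements."""
    rng = random.Random(seed)
    quality = [rng.gauss(5.0, 1.0) for _ in range(n_papers)]
    scores = [
        sum(q + rng.gauss(0.0, noise_sd) for _ in range(n_reviews)) / n_reviews
        for q in quality
    ]
    k = int(n_papers * accept_rate)
    score_cutoff = sorted(scores, reverse=True)[k - 1]
    quality_cutoff = sorted(quality, reverse=True)[k - 1]
    best = [q >= quality_cutoff for q in quality]
    accepted = [s >= score_cutoff for s in scores]
    missed = sum(b and not a for b, a in zip(best, accepted))
    return missed / sum(best)

if __name__ == "__main__":
    for rate in (0.5, 0.3, 0.1):
        print(f"acceptance rate {rate:.0%}: "
              f"{simulate(accept_rate=rate):.0%} of the truly best papers rejected")
```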

Rejecting good papers is bad, because you are holding up science. Good papers, whose research should be out there for the world to see, don't get published. Authors don't get feedback, and get dissuaded from doing further research.

Why are reviewer opinions so conflicting? Part of the reason is certainly that there are few incentives to do a good job when reviewing papers, or even to review papers at all, so why bother? But more fundamentally, it is not always possible to tell the good papers from the bad ones. It is often as hard to spot greatness as it is to spot errors and misconduct. Many groundbreaking papers initially got rejected. There are probably many other groundbreaking results that the world doesn't know about, because the papers were never published.

If conferences are selective, reviewers will become nasty. It is a well-known fact that reviewers in many parts of computer science are more negative than in other sciences. This is probably because they might themselves submit to the same conference, or have submitted in the past and gotten rejected, and they don't want the paper they are reviewing to be treated better than their own paper was or will be. This breeds a culture of nastiness.

People respond to incentives. With selective conferences, researchers will start writing papers to maximise the likelihood of acceptance, and start doing research that can be written up in such papers. This is a disastrous consequence, because the easiest way to get into a selective conference is to write a paper which makes a small incremental advance, and which is not wrong in any way. The easiest part of a reviewer's job (certainly so for a nasty reviewer) is to find faults in the paper under review. I personally feel I have done a good job as a reviewer when I've found many faults in the paper I'm reviewing. Papers that make drastic claims or bold hypotheses are the easiest to shoot down. It is much harder to reject a paper because it is "not exciting". Thus selective conferences could conform to the Law of Jante, even though that is nobody's intention.

I suspect that this effect is even greater for people entering some sub-field of computer science, without being hand-led by someone who is already an insider. A newcomer to the field does not know which buzzwords to use, important people to cite and theories to subscribe to, making it very easy to shoot down an uncomfortable paper.

To sum this up, selective conferences are bad for science because good papers get rejected, because they perpetuate the myth that we can accurately judge the quality of a paper before it is even published, because they breed a culture of nastiness, and because they can reward mediocre research rather than ground-breaking research.

I see no reason why we should go on playing this game. Instead, we should have inclusive conferences that accept all papers that are good enough. This could be 10%, 50% or even 100% of them. Good enough could be defined as being on topic, substantially correct, intelligibly written and making some sort of contribution. This would of course mean that we can no longer judge a researcher based on what conferences he or she gets papers accepted in.

So, if we can't judge papers based on what conference they have been published in, how should we judge them? Well, there are two ways. The first is to actually read them. Shocking as this suggestion might seem, reading a paper is the only way to really know its value. Of course, it requires that you know the research field well enough to understand the paper, and that you are prepared to spend the time it takes to read it.

The other way is to wait a few years and see if the paper influences other people's research, and therefore gets cited. In addition to citation counts, we could have some sort of voting mechanism, where attendants of a conference or members of a research community get to vote on the most important papers of conferences three or five years before. The problem with this is of course that you have to wait a few years.

But there is not really any way around it, if you don't want to or can't read the papers. Research takes time, and hiring or tenure decisions should not be based on the sort of incomplete information you can get from low-quality metrics such as in which venue a paper was published.

Thursday, July 18, 2013

On best paper award selection through reviewer nomination followed by attendee vote

At the IEEE Conference on Computational Intelligence and Games, we decide on our annual best paper award through reviewer nomination followed by an attendee vote. Concretely, this means that we select all papers with an exceptionally high average review score, and all papers with a high score where several reviewers have separately indicated that the paper should be nominated for a best paper award. The nominated papers (for CIG 2013, there are 9 nominated papers) are presented in one or two plenary sessions, and the conference attendees are asked to vote for the best paper award at the end of the presentations by way of secret ballot. The paper with the most votes wins.
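For concreteness, the nomination step can be sketched in a few lines of code. The threshold values below are invented for illustration; they are not the actual cutoffs used for CIG 2013.

```python
def nominate(papers, high_score=8.5, nominate_score=7.5, min_nominations=2):
    """Sketch of the nomination rule described above: a paper is nominated if
    its average review score is exceptionally high, or if it has a high score
    and several reviewers independently ticked the best-paper box.
    All cutoff values here are hypothetical."""
    return [
        p for p in papers
        if p["avg_score"] >= high_score
        or (p["avg_score"] >= nominate_score
            and p["best_paper_nominations"] >= min_nominations)
    ]

# Example: papers A and B are nominated, C is not.
papers = [
    {"title": "A", "avg_score": 9.0, "best_paper_nominations": 1},
    {"title": "B", "avg_score": 7.8, "best_paper_nominations": 3},
    {"title": "C", "avg_score": 7.9, "best_paper_nominations": 1},
]
print([p["title"] for p in nominate(papers)])  # -> ['A', 'B']
```

The nominated papers then go to the attendee vote described above.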

To paraphrase someone famous, I think that while this system for deciding a best paper award might not be perfect, it is better than all the other systems that have been tried. However, we recently received a mail questioning this policy, and suggesting that we instead select a best paper award by a special awards committee. I wrote a longish answer, which I'm reproducing below (slightly edited):

While some conferences decide awards in a small committee, other conferences (including some reputable ones like Gecco, Foundations of Digital Games and EuroGP) decide on their awards like us, through reviewer nomination and votes among conference attendees. There are several reasons for why one might want to do it this way:

* It makes the award more legitimate and avoids many potential conflicts of interest. If the award is decided on by a small committee, which is either anonymous or otherwise conducts its work in secret, suspicions can always arise about why the committee decided as it did. This is especially true for a relatively small community where the conference organisers know many of the attendees personally. (It is worth noting that the chairs of CIG have this year not selected any papers directly at all; the 9 nominees are selected purely based on a cutoff in terms of reviewer scores and number of best paper nominations.)

* It ensures that a larger number of people with different expertise get to weigh in on the award. This could be seen as ensuring quality through the "wisdom of the crowds", or simply as ensuring that a set of experts whose fields of competence and interest reflect those of the conference attendees gets to decide. In a small committee, some fields of expertise are bound to be missing.

* It engages the audience. Attendees are more likely to pay close attention in a session where every attendee is expected to provide feedback, especially if this feedback has real impact.

* It incentivises good presentations. Twenty minutes is enough to present the core ideas of any conference paper. However, many researchers do not put sufficient effort into preparing and revising their presentations, and as a result conferences are often filled with poor presentations. Knowing that getting the best paper award or not depends partly on how well you can bring your message across tends to have a great effect on presentation quality.

As a personal anecdote, earlier this year I attended the best paper session at EuroGP in Vienna. The winner was a paper that was complex, unintuitive and challenged core notions of how genetic programming works. The presenter had gone to great lengths to prepare a presentation that most of the audience actually understood - and walked off with a very well-deserved award. To me, that presentation was worth as much as the rest of the conference put together.

Thursday, January 10, 2013

CfP: PCG workshop 2013

Call for Papers

The fourth workshop on Procedural Content Generation in Games (PCG 2013)
Organized in conjunction with the International Conference on Foundations of Digital Games (FDG 2013)

Important Dates

Full paper submission: March 4
Decision notification: March 25
Camera-ready deadline: April 1
Workshop held: between May 14 and 17

Website: http://pcg.fdg2013.org/

Procedural content generation (PCG) in games, a field of growing popularity, offers hope for substantially reducing the authoring burden in games, improving our theoretical understanding of game design, and enabling entirely new kinds of games and playable experiences. The goal of this workshop is to advance knowledge in PCG by bringing together researchers and fostering discussion about the current state of the field. We invite contributions on all aspects of generating game content, using any method. Descriptions of new algorithms, theoretical or critical analyses, and empirical studies of implementations and applications are all welcome.

We solicit submissions as either full papers about results from novel research (8 pages) or short papers describing works in progress (4 pages). Papers may be about a variety of topics within procedural content generation, including but not limited to:

    Offline or realtime procedural generation of levels, stories, quests, terrain, environments, and other game content
    Case studies of industrial application of procedural generation
    Issues in the construction of mixed-mode systems with both human and procedurally generated content
    Adaptive games using procedural content generation
    Procedural generation of game rulesets (computer or tabletop)
    Techniques for procedural animation
    Issues in combining multiple procedural content generation techniques for larger systems
    Procedural content generation in non-digital games
    Procedural content generation as a game mechanic
    Automatic game balancing through generated content
    Techniques for games that evolve and/or discover new game variants
    Player and/or designer experience in procedural content generation
    Procedural content generation during development (e.g. prototyping, playtesting, etc.)
    Theoretical implications of procedural content generation
    How to incorporate procedural generation meaningfully into game design
    Lessons from historical examples of procedural content generation (including post-mortems)

Authors are especially encouraged to submit work that has the potential to be adopted by the digital games industry.

Organizers

Alex Pantaleev, SUNY Oswego
Gillian Smith, Northeastern University
Joris Dormans, Amsterdam University of Applied Sciences
Antonio Coelho, Universidade do Porto

Friday, November 30, 2012

Call for Expressions of Interest: Hosting IEEE Conference on Computational Intelligence and Games 2015


The IEEE Conference on Computational Intelligence and Games is the premier annual event for researchers applying computational and artificial intelligence techniques to games. The domain of the conference includes all sorts of CI/AI applied to all sorts of games, including board games, video games and mathematical games. Recent editions have been held in Granada, Spain (2012) and Seoul, Korea (2011). The next CIG will be held in Niagara Falls, Canada (2013), very likely to be followed by Dortmund, Germany (2014). Since the start of the conference series in 2005, there has been a trend towards higher numbers of both submissions to and attendees at successive conferences.

We are now looking for expressions of interest from people willing to host CIG 2015. Given the IEEE Computational Intelligence Society policy that conferences alternate between Europe, North America and Asia, we are looking for an Asian location for CIG 2015.

Expressions of interest should be sent to Julian Togelius (julian@togelius.com) by December 15; questions about the procedure should be directed to the same address. All expressions of interest will be forwarded to members of the Games Technical Committee for discussion and a straw poll, and the winning submitter will be invited to submit a formal application to host the conference to the IEEE Computational Intelligence Society's Conference Committee.

An expression of interest should be a text document of one or a few pages. Apart from the proposed location and dates, it should include the names and short biographies of a general chair and preferably some other proposed organising committee members, e.g. program chair and local chair. It should also include a brief description of the proposed site in terms of facilities available, tourist attractions and transport connections. No budgetary information is necessary at this stage.

Past CIG conferences: http://www.ieee-cig.org/

Games Technical Committee: http://cis.ieee.org/games-tc.html

Saturday, October 06, 2012

Call for papers: FDG 2013

Foundations of Digital Games 2013
Call for papers, workshops, panels, experimental games and participation

14-17 May 2013
Chania, Crete, Greece
http://www.fdg2013.org/

We invite researchers and educators to submit to FDG 2013 and share insights
and cutting-edge research related to game technologies and their use. FDG 2013
will include presentations of peer-reviewed papers, invited talks by
high-profile industry and academic leaders, panels, and posters. The conference
will also host a technical demo session, a Research and Experimental Games
Festival, and a Doctoral Consortium. The technical demo session will include
novel tools, techniques, and systems created for games. The Research and
Experimental Games Festival will showcase the latest experimental and research
games. The Doctoral Consortium serves as a forum for Ph.D. students to present
their dissertation research, exchange experiences with peers, discuss ideas for
future research and receive feedback from established games researchers and the
wider FDG community.

Important dates
---

Workshop proposals:
* Submission: 28 October 2012
* Notification: 11 November 2012

Papers, panel proposals, doctoral consortium:
* Submission: 10 December 2012
* Notification: 1 March 2013
* Camera-ready: 18 March 2013

Research and experimental game festival:
* Submission: 13 January 2013
* Notification: 22 February 2013
* Camera-ready: 18 March 2013

Posters and demos:
* Submission: 4 March 2013
* Notification: 18 March 2013
* Camera-ready: 31 March 2013

Full papers
---
Full papers must not exceed 8 pages in length. Authors should submit to either
the general conference or one of the following tracks:

* Game studies, social science track (games, players, and their role in society
  and culture)

* Game studies, humanities track (aesthetic, philosophical, and ontological
  aspects of games and play)

* Game design (methods, techniques, studies)

* Serious games (building and evaluating games for a purpose, learning in games)

* Game education (preparing students to design and develop games)

* Artificial intelligence (agents, motion/camera planning, navigation,
  adaptivity, content creation, dialog, authoring tools)

* Game technology (engines, frameworks, graphics, networking, animation)

* Interaction and player experience (game interfaces, player metrics, modeling
  player experience)

Panels
---
Panel submissions should be in the form of a 2-page extended abstract
describing the focus of the panel, providing a list of confirmed speakers, and
indicating their areas of expertise relative to the topic. We encourage both
debate-style panels that include representatives advocating several positions
on a topic of disagreement, and emerging-area style panels that consolidate and
explain recent work on a subject of interest to the FDG community.

Research and experimental games festival
---

The Festival is designed to showcase playable games that are experimental or
have a research component. Submitted games could be significant because they
are designed to answer a research question or experiment with the design
process, or because their technological components represent research
advancements. Works in progress are permitted, but the game will ideally
include at least one playable level (or comparable unit of play time). Works
that have not yet reached this stage may be more suitable for the conference
demo track. In addition to submitting the game, submissions should also include
a 2–4 page writeup of the project. The text should outline the game's research
context, and how the work demonstrates rigor in methodology and a contribution
to knowledge. Submissions should also include a link to the game hosted on your
own server or one of your choosing. We welcome and encourage works exploring a
variety of disciplinary approaches and methodologies, including
interdisciplinary collaborations. It is the responsibility of the contributor
to ensure all necessary information is accessible at all times during the
judging period (13 January 2013 to 22 February 2013).

Posters and demos
---
The poster and demo track provides a forum for late-breaking and in-progress
work to be presented to the community. Submissions should be in the form of a
2-page extended abstract. The interactive technical demo event will showcase
the latest tools, techniques, and systems created for games by academic or
industrial research groups. (Playable games should instead be submitted to the
Research and Experimental Games Festival.)

Workshop proposals
---
The conference workshops are full-day and half-day sessions focused on emerging
game-related topics. These workshops provide an informal setting for new
developments to be presented, discussed and demonstrated. We are particularly
interested in topics that bridge different communities and disciplines. Concise
workshop proposals (2 pages) should include: an extended abstract, the
objectives and expected outcome of the workshop, the planned activities, the
background of the organizer(s), the anticipated number of participants, and the
means for soliciting and selecting participants.

Doctoral consortium
---
We invite PhD students to apply to the Doctoral Consortium, a forum to provide
PhD students with early feedback on their research directions, from fellow
students, researchers, and experienced faculty in the area. The consortium is
intended primarily for PhD students who intend to pursue a career in academia,
who will soon propose, or have recently proposed, their research. To apply,
doctoral students should submit a CV, a 3-page extended abstract describing
their proposed research, and a support letter from their PhD advisor. The
abstract should address the goals of your research, the proposed approach and
how it differs from prior work, any results you may have, and your plans for
completing the work. Invited Doctoral Consortium students will give a
presentation and present a poster at the conference.


On behalf of the organizing committee:

General chairs: Georgios N. Yannakakis and Espen Aarseth
Program chairs: Kristine Jørgensen and James Lester

Proceedings chair: Mark J. Nelson
Workshops chair: Julian Togelius
Industrial relations chair: Alessandro Canossa
Local chairs: Kostas Karpouzis and Alexandros Potamianos

Track chairs: Kevin Kee, Rilla Khaled, Olli Leino,
              R. Michael Young, Jose Zagal (more to be announced)