At the start of 2018, I will officially become the Editor-in-Chief of the IEEE Transactions on Games (ToG). What is this, a new journal? Not quite: it is the continuation of the IEEE Transactions on Computational Intelligence and AI in Games (TCIAIG, which has been around since 2009), but with a shorter name and much wider scope.
This means that I will have the honor of taking over from Simon Lucas, who created TCIAIG and served as its inaugural Editor-in-Chief, and Graham Kendall, who took over from Simon. Under their leadership, TCIAIG has become the most prestigious journal for publishing work on artificial intelligence and games.
However, there is plenty of interesting work on games, with games, or using games that is not artificial intelligence. Wouldn't it be great if we had a top-quality journal, especially one with the prestige of an IEEE Transactions, where such research could be published? This is exactly the thought behind the transformed journal. The scope of the new Transactions on Games simply reads:
The IEEE Transactions On Games publishes original high-quality articles covering scientific, technical, and engineering aspects of games.
This means that research on artificial intelligence for games, and games for artificial intelligence, is very welcome, just as it was in TCIAIG. But ToG will also be accepting papers on human-computer interaction, graphics, educational and serious games, software engineering in games, virtual and augmented reality, and other topics. The scope specifically indicates "scientific, technical, and engineering aspects of games", and I expect that the vast majority of what is published will be empirical and/or quantitative in nature. In other words, game studies work belonging primarily in the humanities will be outside the scope of the new journal. The same goes for work that has nothing to do with games, for example, game theory applied to non-game domains. (While there is some excellent work on game theory applied to games, much game theory research has nothing to do with games that anyone would play.) Of course, acceptance/rejection decisions will be taken based on the recommendations of Associate Editors, who act on the recommendations of reviewers, leaving some room for interpretation of the exact boundaries of what type of research the journal will publish.
Even before I take over as Editor-in-Chief, I am working together with Graham to refresh the editorial board of the journal. I expect to keep many of the existing TCIAIG associate editors, but will need to replace some, and in particular to add more associate editors with knowledge of the new topics the journal will cover, and with visibility in those research communities. I will also be working on reaching out to these research communities in various ways, to encourage researchers there to submit their best work to the IEEE Transactions on Games.
Given that I will still be teaching, researching and leading a research group at NYU, I will need to cut down on some other obligations to free up time and energy for the journal. As a result, I will be very restrictive when it comes to accepting reviewing tasks and conference committee memberships in the near- to mid-term future. So if I turn down your review request, don't take it personally.
Needless to say, I am very excited about taking on this responsibility and working to make ToG the journal of choice for anyone doing technical, engineering, or scientific research related to games.
Saturday, October 28, 2017
Sunday, July 23, 2017
Some advice for journalists writing about artificial intelligence
Dear Journalists,
I'd like to offer some advice on how to write better and more truthfully when you write articles about artificial intelligence. The reason I'm writing this is that there are a whole lot of very bad articles on AI (news articles and public interest articles) being published in newspapers and magazines. Some of them are utter nonsense, bordering on misinformation; others capture the gist of what goes on but are riddled with misunderstandings. No, I will not provide examples, but anyone working in AI and following the news can provide plenty. There are of course also many good articles about AI, but the good/bad ratio could certainly be improved.
First off, I understand. You're writing about an extremely fast-moving field full of jargon and enthusiastic people with grand visions. Given all this excitement, there must be plenty to write about, but you don't know much (or even anything) about the field. You probably know as little about AI as I know about, say, tanning. But where tanning evolves only very slowly and involves very concrete materials and mechanics, AI moves at breakneck speed and few of those words that get thrown around seem to refer to anything you can touch or see. There's a feeling that you need to write about the latest developments NOW before they are superseded, but it's hard to see where to even begin to decipher the strange things those AI researchers say. And of course you want to write something readable, and clickable, and you don't have much time. It can't be easy.
So here are a few things to keep in mind, and some concrete recommendations, for more critical and higher-quality reporting on AI. Some of this is based on my experience with being interviewed by journalists of varying technical proficiency, and with varying inclination to buy the story I was trying to sell them. Yes, we're all trying to sell something; even we curmudgeons in the ivory tower are trying to sell you something. More about this below.
Keep in mind: AI is a big field, and very diverse in terms of topics and methods used. (True, it's not as diverse as it should be in some other senses.) The main AI conferences (such as IJCAI, AAAI, ICML and NIPS) have thousands of attendees, and most of them only understand a small part of what goes on in the conference. When I go to one of these conferences, I can follow maybe 20% of the talks and get something out of them. While I might be a bit dim myself, it's rare to find anyone who can keep up to date with sub-fields as diverse as constraint propagation, deep learning and stochastic search.
Recommendation: Do not assume that the researchers you talk to know "what's going on right now in AI". Even more importantly, if someone says they know what's going on right now in AI, assume that they only know a small part of the big picture. Double-check with someone working in another field of AI.
Keep in mind: There is no such thing as "an artificial intelligence". AI is a collection of methods and ideas for building software that can do some of the things that humans can do with their brains. Researchers and developers develop new AI methods (and use existing AI methods) to build software (and sometimes also hardware) that can do something impressive, such as playing a game or drawing pictures of cats. However, you can safely assume that the same system cannot both play games and draw pictures of cats. In fact, no AI-based system that I've ever heard of can do more than a few different tasks. Even when the same researchers develop systems for different tasks based on the same idea, they will build different software systems. When journalists write that "Company X's AI could already drive a car, but it can now also write a poem", they obscure the fact that these are different systems and make it seem like there are machines with general intelligence out there. There are not.
Recommendation: Don't use the term "an AI" or "an artificial intelligence". Always ask what the limitations of a system are. Ask if it really is the same neural network that can play both Space Invaders and Montezuma's Revenge (hint: it isn't).
Keep in mind: AI is an old field, and few ideas are truly new. The current, awesome but a tad over-hyped, advances in deep learning have their roots in neural network research from the 1980s and 1990s, and that research in turn was based on ideas and experiments from all the way back in the 1940s. In many cases, cutting-edge research consists of minor variations and improvements on methods that were devised before the researchers doing these advances were born. Backpropagation, the algorithm powering most of today's deep learning, is several decades old and was invented independently by multiple individuals. When IBM's Deep Blue computer beat Garry Kasparov and showed that computers could play Chess better than humans, the very core of the software was the Minimax algorithm, first implemented by Alan Turing in the 1940s. Turing, one of the fathers of both artificial intelligence and the wider field of computer science, also wrote the paper "Computing Machinery and Intelligence", which was published in 1950. While that paper is most famous for introducing what is now called the Turing Test, it also contains the seeds of many of the key ideas in artificial intelligence.
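To see just how simple the core idea is, here is a minimal sketch of Minimax in Python. This is purely illustrative: the game interface (legal_moves, apply, is_terminal, score) is invented for this sketch, and Deep Blue's actual implementation was of course vastly more elaborate.

    # A minimal sketch of Minimax for a two-player zero-sum game.
    # The `game` object is a hypothetical interface, assumed to provide:
    #   legal_moves(state), apply(state, move), is_terminal(state),
    #   and score(state), where higher scores favor the maximizing player.
    def minimax(game, state, depth, maximizing):
        """Return the best score reachable from `state` within `depth` plies."""
        if depth == 0 or game.is_terminal(state):
            return game.score(state)
        scores = [
            minimax(game, game.apply(state, move), depth - 1, not maximizing)
            for move in game.legal_moves(state)
        ]
        # One player picks the move that maximizes the score; the other,
        # assumed to play optimally, picks the move that minimizes it.
        return max(scores) if maximizing else min(scores)

Alpha-beta pruning and much of classic game-tree search are refinements of this basic recursion, not replacements for it.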
Recommendations: Read Turing's 1950 paper. It's surprisingly easy and enjoyable to read, free from mathematical notation, and any technical terms can easily be glossed over. Marvel at how many of the key ideas of artificial intelligence were already in place, if only in embryonic form. When writing stories about exciting new developments, also consult an AI researcher who is old, or at least middle-aged. Someone who was doing AI research before it was cool, or perhaps even before it was uncool, and so has seen a full cycle of AI hype. Chances are that person can tell you which old idea this new advance is a (slight?) improvement on.
Keep in mind: Researchers always have something to sell. Obviously, those working in some kind of startup are looking to increase the valuation of their company and their chances of investment or acquisition. Those working in academia are looking for talk invitations, citations, promotions and so on. Those working in a large company will want to create interest in some product which might or might not be related to their actual results.
Recommendations: Don't believe the hype. Approach another researcher, one the people you're writing about did not refer you to, and ask if that person believes their claims.
Keep in mind: Much of "artificial intelligence" is actually human ingenuity. There's a reason why researchers and developers specialize in applications of AI to specific domains, such as robotics, games or translation: when building a system to solve a problem, lots of knowledge about the actual problem ("domain knowledge") is included in the system. This might take the form of providing special inputs to the system, using specially prepared training data, hand-coding parts of the system, or even reformulating the problem so as to make it easier.
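To make this concrete, here is a hypothetical sketch in Python of what hand-coded domain knowledge often looks like in practice. Every name below is invented for illustration; the point is the pattern, not any particular system.

    # Instead of learning from raw observations, a developer hand-codes
    # game-specific features that a learning algorithm then consumes.
    # All names here are hypothetical, made up for this example.
    def extract_features(game_state):
        """Hand-crafted features for an imaginary arcade game."""
        return [
            game_state.distance_to_nearest_enemy(),  # human insight: enemies matter
            game_state.lives_remaining(),            # human insight: survival matters
            game_state.time_since_last_powerup(),    # human insight: power-ups matter
        ]

Whatever learning algorithm sits on top of these features, much of the "intelligence" is already baked into choosing them, and they stop making sense the moment you change the game.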
Recommendation: A good way of understanding which parts of an "AI solution" are automatic and which are due to niftily encoded human domain knowledge is to ask how the system would work on a slightly different problem.
I'd better stop writing here, as this text probably already sounds far too grumpy. Look, I'm not grumpy, I'm barely even old. And I don't want to give the impression that there isn't a lot of exciting progress in AI these days. In fact, there are enough genuine advances to report on that we don't need to pad out the reporting with derivative research that's being sold as new. Let's all try to be honest, critical and accurate, shall we?