Sunday, July 23, 2017

Some advice for journalists writing about artificial intelligence

Dear Journalists,

I'd like to offer some advice on how to write better and more truthfully when you write articles about artificial intelligence. The reason I'm writing this is that there are a whole lot of very bad articles on AI (news articles and public interest articles) being published in newspapers and magazines. Some of them are utter nonsense, bordering on misinformation; some of them capture the gist of what goes on but are riddled with misunderstandings. No, I will not provide examples, but anyone working in AI and following the news can provide plenty. There are of course also many good articles about AI, but the good/bad ratio could certainly be improved.

First off, I understand. You're writing about an extremely fast-moving field full of jargon and enthusiastic people with grand visions. Given all this excitement, there must be plenty to write about, but you don't know much (or even anything) about the field. You probably know as little about AI as I know about, say, tannery. But where tannery evolves only very slowly and involves very concrete materials and mechanics, AI moves at breakneck speed and few of those words that get thrown around seem to refer to anything you can touch or see. There's a feeling that you need to write about the latest developments NOW before they are superseded, but it's hard to see where to even begin to decipher the strange things those AI researchers say. And of course you want to write something readable, and clickable, and you don't have much time. It can't be easy.

So here's a few things to keep in mind, and some concrete recommendations, for more critical and higher-quality reporting on AI. Some of this is based on my experience with being interviewed by journalists of varying technical proficiency, and with varying inclination to buy the story I was trying to sell them. Yes, we're all trying to sell something, even we curmudgeons in the ivory tower are trying to sell you something. More about this below.

Keep in mind: AI is a big field, and very diverse in terms of topics and methods used. (True, it's not as diverse as it should be in some other senses.) The main AI conferences (such as IJCAI, AAAI, ICML and NIPS) have thousands of attendees, and most of them only understand a small part of what goes on in the conference. When I go to one of these conferences, I can follow maybe 20% of the talks and get something out of them. While I might be a bit dim myself, it's rare to find anyone who can keep up to date with sub-fields as diverse as constraint propagation, deep learning and stochastic search.

Recommendation: Do not assume that the researchers you talk to know "what's going on right now in AI". Even more importantly, if someone says they know what's going on right now in AI, assume that they only know a small part of the big picture. Double-check with someone working in another field of AI.

Keep in mind: There is no such thing as "an artificial intelligence". AI is a collection of methods and ideas for building software that can do some of the things that humans can do with their brains. Researchers and developers develop new AI methods (and use existing AI methods) to build software (and sometimes also hardware) that can do something impressive, such as playing a game or drawing pictures of cats. However, you can safely assume that the same system cannot both play games and draw pictures of cats. In fact, no AI-based system that I've ever heard of can do more than a few different tasks. Even when the same researchers develop systems for different tasks based on the same idea, they will build different software systems. When journalists write that "Company X's AI could already drive a car, but it can now also write a poem", they obscure the fact that these are different systems and make it seem like there are machines with general intelligence out there. There are not.

Recommendation: Don't use the term "an AI" or "an artificial intelligence". Always ask what the limitations of a system are. Ask if it really is the same neural network that can play both Space Invaders and Montezuma's Revenge (hint: it isn't).

Keep in mind: AI is an old field, and few ideas are truly new. The current, awesome but a tad over-hyped, advances in deep learning have their roots in neural network research from the 1980s and 1990s, and that research in turn was based on ideas and experiments from all the way back in the 1940s. In many cases, cutting-edge research consists of minor variations and improvements on methods that were devised before the researchers doing these advances were born. Backpropagation, the algorithm powering most of today's deep learning, is several decades old and was invented independently by multiple individuals. When IBM's Deep Blue computer beat Garry Kasparov and showed that computers could play Chess better than humans, the very core of the software was the Minimax algorithm, first implemented by Alan Turing in the 1940s. Turing, one of the fathers of both artificial intelligence and the wider field of computer science, also wrote the paper "Computing Machinery and Intelligence", which was published in 1950. While that paper is most famous for introducing what is now called the Turing Test, it also contains the seeds of many of the key ideas in artificial intelligence.
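
To see just how old and simple the core idea is: Minimax can be sketched in a handful of lines. This is a toy illustration of my own (nothing to do with Deep Blue's actual code), operating on a hand-built game tree where leaves are payoffs for the maximizing player:

```python
def minimax(node, maximizing):
    """Return the best payoff reachable from this node with optimal play.

    A node is either a number (a leaf payoff for the maximizing player)
    or a list of child nodes. Players alternate: the maximizer picks the
    child with the highest value, the minimizer the lowest.
    """
    if isinstance(node, (int, float)):  # leaf: game over, return payoff
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The maximizer chooses a branch, then the minimizer chooses within it:
tree = [[3, 5], [2, 9]]
minimax(tree, True)  # → 3 (left branch guarantees 3; right only 2)
```

The same recursion, dressed up with board representations, move generators, evaluation functions and pruning, is what sat at the heart of Deep Blue half a century after Turing first worked it out.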

Recommendations: Read Turing's 1950 paper. It's surprisingly easy and enjoyable to read, free from mathematical notation, and any technical terms can easily be glossed over. Marvel at how many of the key ideas of artificial intelligence were already in place, if only in embryonic form. When writing stories about exciting new developments, also consult an AI researcher who is old, or at least middle-aged. Someone who was doing AI research before it was cool, or perhaps even before it was uncool, and so has seen a full cycle of AI hype. Chances are that person can tell you which old idea this new advance is a (slight?) improvement on.

Keep in mind: Researchers always have something to sell. Obviously, those working in some kind of startup are looking to increase the valuation of their company and their chances of investment or acquisition. Those working in academia are looking for talk invitations, citations, promotions and so on. Those working in a large company will want to create interest in some product which might or might not be related to their actual results.

Recommendations: Don't believe the hype. Approach another researcher, one the people you're writing about did not refer you to, and ask whether that person believes their claims.

Keep in mind: Much of "artificial intelligence" is actually human ingenuity. There's a reason why researchers and developers specialize in applications of AI to specific domains, such as robotics, games or translation: when building a system to solve a problem, lots of knowledge about the actual problem ("domain knowledge") is included in the system. This might take the form of providing special inputs to the system, using specially prepared training data, hand-coding parts of the system or even reformulating the problem so as to make it easier.

Recommendation: A good way of understanding which parts of an "AI solution" are automatic and which are due to niftily encoded human domain knowledge is to ask how this system would work on a slightly different problem.
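
A made-up example of what this looks like in practice: a system that appears to "understand" the sentiment of text may really be a hand-coded word list, i.e. human domain knowledge baked straight into the code. (The word lists and function below are invented for illustration, not taken from any real product.)

```python
# Human domain knowledge, disguised as "AI": hand-picked English words.
POSITIVE = {"great", "excellent", "love", "wonderful"}
NEGATIVE = {"awful", "terrible", "hate", "dreadful"}

def sentiment(text):
    """Classify text by counting hand-coded positive and negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

sentiment("I love this great phone")       # → "positive"
sentiment("Das Essen war ausgezeichnet")   # → "neutral" (German praise,
# but the baked-in domain knowledge only covers English)
```

Shift the problem even slightly (another language, another domain's vocabulary) and the "intelligence" evaporates, which is exactly what the question "how would this work on a slightly different problem?" is designed to reveal.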

I'd better stop writing here, as this text probably already sounds far too grumpy. Look, I'm not grumpy, I'm barely even old. And I don't want to give the impression that there isn't a lot of exciting progress in AI these days. In fact, there are enough genuine advances to report on that we don't need to pad out the reporting with derivative research that's being sold as new. Let's all try to be honest, critical and accurate, shall we?

15 comments:

Pat Hayes said...

Excellent!! Speaking as an old AI researcher myself, please allow me to second everything you have said here, in spades. Especially the point about not talking about AI in the singular, as in "an AI". There is not, and never has been, such a thing as "an AI". There is a large, diverse research area called "AI", and there are programs and systems which do various things and are sometimes called "AI programs" really for no better reason than that the people who built them self-identify as AI people working in an AI field.

thiamid said...

Excellent, too! Everybody thinks of AI as an assault or some dangerous black box. Truth is: AI, like everything thinkable, is a concept and not more. Everything else is taken from pop culture.

Unknown said...

I could not agree more - though I think the battle about the term "an AI" is being lost, it's not even close. There is a new sense of the term "AI" and it is a countable noun that roughly means an artificially intelligent agent. If it is not in any dictionaries or glossaries yet, it won't be long. Sorry gents, language evolves. Bummer.

Secret Helper said...

Superb article! Nicely said, and so glad you are pointing people to the Turing article. AI is currently overhyped by many, all while they don't understand the core concepts behind it.

Doug Henningsen said...

Wow, what a spoiler. If I look through your other posts, will I see one addressed to parents on Santa and the Tooth Fairy? Seriously, good article.

Umair said...

This is perfectly timed, thank you so much for sharing this and saving the planet from "an AI" invasion taking over all jobs, much like an alien invasion! In the news, of course.

Daniel Owen van Dommelen said...

Okay, while I agree with most of your post, here's a quote from one of your other articles: "That's right. Game specificity. The problem is that improving how well an artificial intelligence plays a particular game is not necessarily helping us improve artificial intelligence in general." -- Seems like we all like to use the term "an artificial intelligence" sometimes ;)

Ora said...

Excellent writeup! I keep telling people that whoever claims that "AI is taking over" or "AI is an existential threat to humankind" has never participated in a large-scale, real-world software project.

Salvatore D'Agostino said...

Thanks for this, it's interesting to see this cycle; 2017 certainly has had about as many sentences with AI and IoT as one could imagine. Glad these were well put.

Joshua Grams said...

@Daniel Owen van Dommelen: I don't think those two uses of the phrase conflict at all. In this article "an AI" refers to a mythical general-purpose AI that can tackle any problem. In the other article "an AI" refers to an implementation of a particular set of AI techniques for a particular purpose, and the point there is that, again, those techniques are not generally applicable to all problems.

Anna Gilligan said...

Great perspective, but as someone new to the field trying to educate others on these hard truths it can be hard. I want to share everything I learn, but have to recognize they may be overwhelmed even by the "buzzwords" and that I myself can't possibly keep pace with the specific expertise areas. So, there's definitely a balance between making it an accessible topic, then correcting and informing in the right ways.

Darcy said...

Basically journos write for Google Analytics. The content is crap, rushed and repeated so many times that it gets boring very quickly. In fact anything on AI or blockchain just doesn't get exciting anymore.

Darcy said...

Journos these days write for Google Analytics. The info is shallow, repetitive and quite often wrong. You have to really sieve through the rubbish to get anything meaningful, and who has time for that? It's a sorry state of affairs in tech journalism.

Dave said...

This is a problem with science reporting in general, which has been a major pet peeve of mine ever since I gained enough knowledge during undergrad to spot the BS. Bad reporting on AI hits especially close to home now that I'm a data scientist. And even when the reporting itself is passable, the headline writing is very often misleading and sensationalist.

Part of the problem is that reporters are incentivized to tell exciting stories and headline writers are incentivized to get clicks, regardless of whether their headline matches up with the article it's supposed to represent. And let's face it, the truth about AI isn't going to be very exciting to laypersons in the way it is to those of us who work with AI. "Stanford researchers find a new activation function that moderately improves the performance of some particular class of neural networks," isn't sexy. But "Facebook manages to shut down chatbot just before it could become evil" is.

Hardly a week goes by that I don't have to explain to someone that we're in no danger of a robot uprising, that this is an invention of the pop science media and really great science fiction writers, and that zero people I know who actually work with AI are concerned in any way.

In any event, thanks for this post. I included it in my own "Things I liked this month" post over at breakingbayes.com (shameless plug, I know.)

inquiringmind said...

I like the article. It made sense and made important points on the limits of AI. I will try to heed those lessons.

But there is a far deeper question than "what is AI?" The question that troubles me is "why is there AI?"

I'm shocked that this most basic question does not get more attention, because it is so very key to the massive investments in technologies to replace humans. There is no magic in AI, despite the bold promises of a better, greater world (make the world great again?)

The implications and consequences of breakneck AI development and even singularity are huge, yet the conversation has left out the very people who will be most affected.

So unethical. There are very profound problems that our scientists and great minds (and rich investors) could and should address: alarming levels of inequity, climate change, social collapse. Despite the claims of curing cancer and designer babies, AI falls short of seeking to resolve real problems and instead seeks to copy, plagiarize, reduce and replace humans... for the profit of a very select few.

Spokespersons for AI are fork-tongued: they talk about extending lifespans and curing diseases, but meanwhile will make billions of people redundant. Who will benefit? The very, very rich, of course. For the rest of us: unemployment, illness, food shortages, homelessness, and failure to have a meaningful life.

Let's call for a conversation by the people on all aspects of AI, and slow down the hysterical pace.