John Langford argues that machine learning is too easy. He doesn't specify exactly what he means by this, but it seems to be that it's possible to publish papers and make a career in one area of machine learning without even understanding the core ideas of other areas.
Apparently, he thinks this is a problem. But why?
I could agree that it would be a problem if we were talking about science here. But we aren't. I've long since stopped pretending that I do science. (Except for the remote possibility that something I do might have an impact on a real science, such as biology or psychology.) We are just not studying the natural world.
I don't think of it as engineering either, as an engineer is meant to construct things that actually work and make economic sense. Most of what I do is pretty far from being useful or even reliable. Instead I think of myself as an inventor, practicing blue-sky invention of algorithms and toy applications without direct economic pressure. (Role model: Gyro Gearloose.)
So in a field of invention where people are inventing things following different paradigms and variations on a common theme of learning/optimization, is it a problem that most of the inventors have only a very hazy idea of what the others are doing? Not necessarily, as we are not all working towards the same goal (at least in the near term) and don't need to agree on anything.
Of course, it's great when you can combine knowledge from different research fields and come up with a nice synthesis - this is an almost surefire way to "be creative", and it's necessary that someone does it every once in a while. But for the most part, I don't feel like digesting hundreds of pages of dense formulas in order to understand e.g. statistical learning theory. I feel my time would be much better spent just getting on with my own inventions, and reading up on stuff that's directly relevant to them (or seemingly completely unrelated, in order to look for new applications).