Computer science differs from most other academic fields in that conference papers are counted as real, citable publications. While journals are generally seen as more important, it is perfectly possible to get a tenured faculty position without ever publishing a journal paper.
This is mainly a good thing. The relatively short time from initial submission to publication (compared to traditional journals) means that research gets published in a timely fashion, and the deadlines make sure that people get their act together and submit. Not all computer science papers are super-polished, but who cares? It's more important that people get their ideas and results out there for others to build on.
However, it has also meant that many computer science conferences are hard to get into. In computational intelligence it's pretty common to accept about half of the submitted papers. Other fields are much harsher. In American academia, it is common to require that a conference accept at most 30% of submitted papers in order to count as "selective". Many conferences are stricter still, with acceptance rates around 10%.
Why are acceptance rates so low? The often-stated reason is that conference attendees don't have time to sit through lots of bad research being presented, and therefore only the best papers should be presented. However, this assumes that all attendees see all talks. If there are many talks (or posters) in parallel at a conference, people can choose which talk or poster they want to see. This is, after all, how it works in fields such as medicine and physics, where conference presentations are based on abstracts, and all or most abstracts are accepted.
The real reason is that conferences want to be prestigious through being exclusive. If a conference only accepts 13% of submitted papers, getting in is seen as an accomplishment for the author, something to put in your CV to impress various committees with. Paper acceptance becomes instant gratification for the scientist. For conferences, being an exclusive conference means that you get more submissions and more attendees, and as every organisation wants to perpetuate itself there is an incentive for conferences to be exclusive.
So what is the problem with this? There are several problems. To begin with, peer review is notoriously noisy. Most conferences have three or four reviewers per paper, and reviewers very often disagree with each other. One reviewer might think a paper is excellent, another thinks it is boring and derivative, a third thinks it is off-topic, and a fourth does not understand the paper and therefore thinks it is badly written and/or wrong. How do you make a decision based on this? Being the program chair of a conference means making decisions based on conflicting reviewer judgements, knowing that your decisions will often be wrong. If you have a low acceptance rate, it is extremely probable that you will reject papers with one or two negative reviews, thereby rejecting many good papers.
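The scale of this effect is easy to underestimate, so here is a minimal simulation sketch. All the numbers in it (four reviewers, Gaussian score noise, a fixed acceptance cutoff) are illustrative assumptions, not data from any real conference; the point is only that even modest reviewer noise rejects a substantial fraction of papers whose true quality is above the bar.

```python
import random

random.seed(0)

def accepted(true_quality, n_reviewers=4, noise=1.5, threshold=7.0):
    """Accept a paper if the mean of noisy review scores clears a cutoff.

    Each reviewer's score is modelled as the paper's true quality plus
    Gaussian noise -- a deliberately crude stand-in for reviewer disagreement.
    """
    scores = [random.gauss(true_quality, noise) for _ in range(n_reviewers)]
    return sum(scores) / n_reviewers >= threshold

# Simulate many genuinely good papers: true quality above the cutoff.
trials = 10_000
good_quality = 7.5
rejected = sum(not accepted(good_quality) for _ in range(trials))
print(f"good papers rejected: {rejected / trials:.0%}")
```

With these made-up parameters, roughly a quarter of papers that are genuinely above the bar still get rejected, simply because four noisy scores are a poor estimate of quality. Tightening the threshold to lower the acceptance rate makes this fraction worse, not better.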
Rejecting good papers is bad, because you are holding up science. Good papers, whose research should be out there for the world to see, don't get published. Authors don't get feedback, and get dissuaded from doing further research.
Why are reviewer opinions so conflicting? Part of the reason is certainly that there are few incentives to do a good job when reviewing papers, or even to review papers at all, so why bother? But more fundamentally, it is not always possible to tell the good papers from the bad ones. It is often as hard to spot greatness as it is to spot errors and misconduct. Many groundbreaking papers initially got rejected. There are probably many other groundbreaking results that the world doesn't know about, because the papers were never published.
If conferences are selective, reviewers will become nasty. Reviewers in many parts of computer science are widely regarded as more negative than in other sciences. This is probably because they might themselves submit to the same conference, or have submitted in the past and gotten rejected, and they don't want the paper they are reviewing to be treated better than their own paper was or will be. This breeds a culture of nastiness.
People respond to incentives. With selective conferences, researchers will start writing papers to maximise the likelihood of acceptance, and start doing research that can be written up in such papers. This is a disastrous consequence, because the easiest way to get into a selective conference is to write a paper which makes a small incremental advance, and which is not wrong in any way. The easiest part of a reviewer's job (certainly so for a nasty reviewer) is to find faults in the paper under review. I personally feel I have done a good job as a reviewer when I've found many faults in the paper I'm reviewing. Papers that make drastic claims or bold hypotheses are the easiest to shoot down. It is much harder to reject a paper because it is "not exciting". Thus selective conferences could conform to the Law of Jante, even though that is nobody's intention.
I suspect that this effect is even greater for people entering some sub-field of computer science without being guided by someone who is already an insider. A newcomer to the field does not know which buzzwords to use, which important people to cite, and which theories to subscribe to, making it very easy to shoot down an uncomfortable paper.
To sum this up, selective conferences are bad for science because good papers get rejected, because they perpetuate the myth that we can accurately judge the quality of a paper before it is even published, because they breed a culture of nastiness, and because they can reward mediocre research rather than ground-breaking research.
I see no reason why we should go on playing this game. Instead, we should have inclusive conferences that accept all papers that are good enough. This could be 10%, 50% or even 100% of them. Good enough could be defined as being on topic, substantially correct, intelligibly written and making some sort of contribution. This would of course mean that we can no longer judge a researcher based on what conferences he or she gets papers accepted in.
So, if we can't judge papers based on what conference they have been published in, how should we judge them? Well, there are two ways. The first is to actually read them. Shocking as this suggestion might seem, reading a paper is the only way to really know its value. Of course, it requires that you know the research field well enough to understand the paper, and that you are prepared to spend the time it takes to read it.
The other way is to wait a few years and see if the paper influences other people's research, and therefore gets cited. In addition to citation count, we could have some sort of voting mechanism, where attendees of a conference or members of a research community get to vote on the most important papers of conferences held three or five years earlier. The problem with this is of course that you have to wait a few years.
But there is not really any way around it, if you can't or won't read the papers. Research takes time, and hiring or tenure decisions should not be based on the sort of incomplete information you get from low-quality metrics such as the venue in which a paper was published.