Soundness vs. Importance
When making the accept/reject decision for conference publications, we judge papers on both scientific soundness (are the claims in the paper true?) and some sort of feeling of importance. These two things should be decoupled. On the one hand, we don't give ourselves enough time to evaluate scientific soundness, and on the other, we don't need to, and should not try to, evaluate importance at the time of publication because we are not very good at doing so. A somewhat amusing/annoying side-effect of this evaluation model is that many papers dedicate several paragraphs to inflating the importance of the results, sometimes obscuring the actual results. Another rumored side-effect is authors cherry-picking data to make their results seem more important. Similarly, when was the last time you read about a negative result in a programming languages conference or journal?
Publication vs. Presentation
The conference system conflates the decision to accept for publication with the decision to accept for presentation. Despite many conferences going to multi-track formats, the number of available presentation slots (and their cost) plays a strong role in limiting the number of papers accepted for publication. On the other hand, the cost of publishing a paper is very low (and should be much lower than it currently is for the ACM). The only limit on publishing papers should be scientific soundness.
Publication Delay
One of the supposed benefits of the conference system is fast turnaround. However, I know of many papers that take over a year, sometimes multiple years, to be published because they receive multiple rejections before being accepted for publication, and not because they are scientifically unsound. This phenomenon slows the scientific process because the dissemination of scientific results is delayed. (Many researchers do not feel it is appropriate to post pre-prints on their web pages.) This phenomenon also increases the reviewing load on the community because some papers end up with nine reviews when three would have sufficed.
Reviewer Expertise
Papers are not always reviewed by the reviewers with the most expertise. On a fairly routine basis, I find minor inaccuracies and problems in published papers in the areas in which I am an expert. In such situations, I feel that I should have reviewed the paper but didn't get a chance to because I wasn't on that particular program committee; I was on other PCs that year. One of the reasons this happens is that there is a large overlap in topics among the major programming language conferences.
Narrow Audience
Most papers in our area are written with a very narrow audience in mind: other experts in programming languages. The reason for this is obvious: program committees consist of experts in programming languages. This narrow writing style reduces the readability of our papers for people outside the area, including the people in industry who are in a position to apply the research.
Reviewer Load
Recently, program chairs have been discouraging PC members from getting help from their graduate students and friends. There are a couple of problems with this. First, reviewing is an important part of training graduate students. Second, with over 20 papers assigned to a PC member over a couple of months, doing high-quality reviews becomes a heroic effort. For academics whose job description includes reviewing, this kind of load is manageable in a macho sort of way. For those in industry (not counting a few research laboratories), I imagine this kind of load is a non-starter.
Post-Publication Reviews and Corrections
The current publication process does not provide a sufficiently low-overhead mechanism for reviews and/or corrections to come along after a paper has been published. In particular, it would be great to have an easy way for someone who is trying to replicate an experiment, or apply a technique in industry, to post a review providing further data.
The Outline of a Solution
Instead of a mixture of conferences and journals, we should just have one big continual reviewing process for all research on programming languages. Anyone who wants to participate could submit papers (anonymously or not) and anyone could review any submission (anonymously or not). Both the paper submissions and the reviews would be public. Authors would likely revise their papers as reviews come in. If it turns out that some people submit lots of papers without doing any reviews, then some simple rules regarding the ratio of paper submissions to reviews could be created. A group of editors would evaluate reviews and skim papers to decide whether a paper is scientifically sound and therefore deserving of publication, that is, deserving of their seal of approval. The editors' meta-review and decision would also be public. Of course, reviews of papers that have already been published would be allowed and could lead to a revision or even retraction of a paper. Regarding presentations, conferences and workshops could select from the recently published papers as they see fit.
This outline of a solution has much in common with the Open Review model already used by several conferences (ICLR, ICML, and AKBC at openreview.net), though it is directly inspired by my positive experiences with the code review process in the C++ Boost open-source software group.