George Lozano:

The digital age has brought many changes to scholarly publishing. For instance, we now read papers, not journals. We used to read papers physically bound with other papers into an issue of a journal; now we just read papers, downloaded individually and independently of the journal. In addition, journals have become easier to produce. A physical medium is no longer necessary, so the production, transportation, dissemination and availability of papers have drastically increased. The former change weakened the connection between papers and their respective journals; papers are now more likely to stand on their own. The latter allowed the creation of a vast number of new journals that, in principle, could compete on a par with long-established ones.

In a previous blog and paper, we documented that the most widely used index of journal quality, the impact factor (IF), is becoming a poorer predictor of the quality of the papers therein. The IF already had many well-documented and openly acknowledged problems, so that analysis simply added another to the list. The data set used for that analysis was as comprehensive as possible and included thousands of journals. During subsequent discussions, the question arose of whether the patterns we documented at a large scale also apply to the handful of elite journals that have traditionally been deemed the best.

Hence, in a follow-up paper we examined Nature, Science, Cell, Lancet, NEJM, JAMA and PNAS (just in case, the last three are the New Engl. J. Med., J. Am. Med. Ass., and Proc. Natl. Acad. Sci.). We identified the 1% and 5% most cited papers in each of the past 40 years and determined the percentage of those papers published by each of these elite journals. In all cases except JAMA and the Lancet, the proportion of top papers published by the elite journals has been declining since the late 1980s.