Over a year ago, I wrote a post tabulating the share of AER: Insights authors who have also published in a top-5 journal*. (The answer was 67%, significantly higher than most other journals, except those that generally solicit papers, like the Journal of Economic Literature.)
Now that AER: Insights is in its second year of publishing and has 60 forthcoming/published articles, I decided to revisit this question, again using Academic Sequitur data. The graph below shows the percent of authors who (a) have published/are forthcoming in a given journal in 2018-2020 and (b) have had at least one top-5 article published since 2000. The journals below are the top ten journals based on that metric.
With a score of 66%, AER: Insights still has the highest share of top-5 authors among journals where submissions are not generally solicited.** The next-highest journal, Theoretical Economics, is five percentage points behind. (There is some indication that the share for AER: Insights is coming down: for articles accepted in 2020, the top-5 share was “only” 60%.)
What if we condition on having two or more top-5 publications? That actually causes AER: Insights to move up in the ranking, overtaking Brookings Papers on Economic Activity.
Whether this pattern exists because AER: Insights is extremely selective, because less-established scholars are reluctant to submit their work to a new-ish journal, or for some other reason is impossible to know without submission data. But no matter how you look at it, the group currently publishing in AER: Insights is quite elite.
*Top 5 is defined as American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics, and Review of Economic Studies.
**AER: Insights would be even higher-ranked by this metric (#3) if we ignored top-5 publications in American Economic Review. Therefore, this pattern is not driven by the fact that both journals are published by the AEA.
Part I: agricultural economics, lab experiments, field experiments & economics of education
Most of us have a sense that it is more difficult to get certain topics published in the top 5 economics journals (American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics, and Review of Economic Studies), but there is not much hard data on this. And if a particular topic appears infrequently in top journals, it may simply be because it’s a relatively rare topic overall.
To get more evidence on this issue, I used Academic Sequitur data, which covers the majority of widely-read journals in economics. The dataset I used contains articles from 139 economics journals and spans the years 2000-2019. On average, 6 percent of the papers in the dataset were published in a top 5 journal.
I classified papers into topics based on the presence of certain keywords in the abstract and title.* I chose the keywords carefully, aiming to both minimize the share of irrelevant articles and to minimize the omission of relevant ones. While there is certainly some measurement error, it should not bias the results. (Though readers should think of this as a “fun-level” analysis rather than a “rigorously peer-reviewed” analysis.)
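As a minimal sketch of this kind of keyword classifier (the keyword lists here are abridged, and `has_topic` is a hypothetical helper name, not the actual code):

```python
# Abridged, illustrative keyword list -- the full lists are in the footnotes.
AG_KEYWORDS = ["farm", "crop insurance", "agribusiness", "hectare"]

def has_topic(title: str, abstract: str, keywords) -> bool:
    """Case-insensitive substring match, so partial-word matches count
    (e.g. 'farm' also matches 'farmer' and 'farming')."""
    text = f"{title} {abstract}".lower()
    return any(kw in text for kw in keywords)

has_topic("Farmer Responses to Crop Insurance", "", AG_KEYWORDS)  # True
has_topic("Monetary Policy and Inflation", "", AG_KEYWORDS)       # False
```

Substring matching keeps the rules simple, at the cost of occasional false positives (e.g. "farm" matching "pharmaceutical farm-out" would be a miss the keyword choice has to guard against).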
I chose topics based on suggestions in response to an earlier Tweet of mine. To keep things manageable, I’m going to focus on a few topics at a time. To start off, I looked at agricultural economics (5.3% of articles in the dataset), field experiments (1.0% of articles), lab experiments (1.9% of articles), and education (1.8% of articles). I chose these to have some topic diversity and also because these topics were relatively easy to identify.** I then ran a simple OLS regression of a “top 5” indicator on each topic indicator (separately).***
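As a rough sketch of this regression (using synthetic data in place of the real dataset, and showing only a field-experiment indicator), note that with a single binary regressor the OLS slope is simply the difference in top-5 publication rates between the two groups; the post's heteroskedasticity-robust standard errors would come from a package like statsmodels via `fit(cov_type="HC1")`:

```python
import numpy as np

# Synthetic stand-in for the article-level dataset: one row per article.
rng = np.random.default_rng(0)
n = 5000
field_exp = rng.binomial(1, 0.01, n)             # ~1% of articles are field experiments
top5 = rng.binomial(1, 0.06 + 0.05 * field_exp)  # 6% baseline top-5 rate, +5pp premium

# OLS of the top-5 indicator on a constant and the topic dummy.
X = np.column_stack([np.ones(n), field_exp])
beta, *_ = np.linalg.lstsq(X, top5, rcond=None)
premium = beta[1]  # estimated top-5 "premium" in percentage points
```

With a dummy regressor, `premium` equals the raw gap in top-5 rates between field-experiment and other papers, which is why a simple OLS on each topic indicator separately is an easy way to read off these gaps.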
The results are plotted in the graph below. Papers based on field experiments are much more likely to be published in a top 5 journal than in the other 134 journals (about 5 percentage points more likely!), while lab-experiment papers are much less likely. Education doesn’t seem to be favored one way or the other, while agriculture is penalized about as much as field experiments are rewarded. Moral of the story: if you want to publish an ag paper in a top 5, make it a field experiment!
Now you might be saying, “I can’t even name 139 economics journals, so maybe this isn’t the relevant sample on which to run this regression.” Fair point (though see here for a way way longer list of econ journals). To address this, I restricted the set of journals to the 20 best-known general-interest journals—including the top 5—and re-generated the results.**** With the exception of lab experiments, the picture now looks quite different: both field experiments and education research are penalized by the top 5 journals, but agriculture is not.
Combining the two sets of results, we can conclude that the top 5 penalize agricultural economics research, but so do the other good general-interest journals. The top 5 journals also penalize field experiments relative to other good general-interest journals, but top general-interest journals as a whole reward field experiments relative to other journals. Finally, top 5 journals penalize education relative to other good general-interest journals, but not relative to the field as a whole.
The second set of results is obviously sensitive to the set of journals considered. If I were to add field journals like the American Journal of Agricultural Economics, things would again look much worse for ag. And how much worse they look for a particular topic depends on how many articles the field journal publishes. So I prefer the most inclusive set of journals, but I welcome suggestions about which set of journals to use in future analyses! Would also love to hear everyone’s thoughts on this exercise in general, so please leave a comment.
*I did not use JEL codes because many journals do not require or publish these and we therefore do not collect them. JEL codes are also easier to select strategically than the words in the title and abstract.
** An article falls into the category of agricultural economics if it contains any of the following words/phrases in the abstract or title (not case-sensitive, partial word matches count): “farm”, “crop insurance”, “crop yield”, “cash crop”, “crop production”, “crops production”, “meat processing”, “dairy processing”, “grain market”, “crop management”, “agribusiness”, “beef”, “poultry”, “hog price”, “cattle industry”, “rice cultivation”, “wheat cultivation”, “grain cultivation”, “grain yield”, “crop diversity”, “soil conditions”, “dairy sector”, “hectare”, “sugar mill”, “corn seed”, “soybean seed”, “maize production”, “soil quality”, “agricultural chemical use”, “forest”. Field experiment: “field experiment”, “experiment in the field”. Lab experiment: “lab experiment”, “laboratory experiment”, “experimental data”, “randomized subject”, “online experiment”. Education: “return to education”, “returns to education”, “college graduate”, “schooling complet”, “teacher”, “kindergarten”, “preschool”, “community college”, “academic achievement”, “academic performance”, “postsecondary”, “educational spending”, “student performance”, “student achievement”, “student outcome”, “student learning”, “higher education”, “educational choice”, “student academic progress”, “public education”, “school facilit”, “education system”, “school voucher”, “private school”, “school district”, “education intervention”. Articles may fall into multiple categories.
*** Standard errors are heteroskedasticity-robust.
**** The 15 additional journals are (in alphabetical order): American Economic Journal: Applied Economics, American Economic Journal: Economic Policy, American Economic Journal: Macroeconomics, American Economic Journal: Microeconomics, American Economic Review: Insights, Economic Journal, Economic Policy, Economica, European Economic Review, Journal of the European Economic Association, Oxford Economic Papers, Quantitative Economics, RAND Journal of Economics, Review of Economics and Statistics, Scandinavian Journal of Economics.
How do we judge how good a journal is? Ideally by the quality of the articles it publishes. But the best systematic way of quantifying quality we’ve come up with so far is citation-based rankings. And these are far from perfect, as a simple Google search will reveal (here’s one such article).
I’ve been using Academic Sequitur data to experiment with an alternative way of ranking journals. The basic idea is to calculate what percent of authors who published in journal X have also published in a top journal for that discipline (journals can also be ranked relative to every other journal, but the result is more difficult to understand). As you might imagine, this ranking is also not perfect, but it has yielded very reasonable results in economics (see here).
Now it’s time to try this ranking out in a field outside my own: Political Science. As a reference point, I took 3 top political science journals: American Political Science Review (APSR), American Journal of Political Science (AJPS), and Journal of Politics (JOP). I then calculated what percent of authors who published in each of 20 other journals since 2018 have also published a top-3 article at any point since 2000.
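The ranking statistic itself is simple to compute. Here is a minimal sketch (`top_share` is a hypothetical helper name; the real calculation runs over the Academic Sequitur author database):

```python
def top_share(journal_authors, top_authors):
    """Share of a journal's distinct recent authors who also appear in
    the top-journal author set -- the statistic behind this ranking."""
    authors = set(journal_authors)  # de-duplicate repeat authors
    return len(authors & set(top_authors)) / len(authors)

# Toy example: 2 of 4 distinct authors also have a top-3 publication.
top_share(["a", "b", "c", "d", "b"], ["b", "d", "e"])  # 0.5
```

One design choice worth noting: authors are counted once per journal regardless of how many articles they published there, so prolific repeat authors don't mechanically inflate a journal's score.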
Here are the top 10 journals, according to this ranking (the above-mentioned stat is in the first column).
Quarterly Journal of Political Science and International Organization come out as the top 2. This is noteworthy because alternative lists of top political science journals suggested to me included these two journals! Political Analysis is a close third, followed by a group of 5 journals with very similar percentages overall (suggesting similar quality).
Below is the next set of ten. Since this is not my research area, I’m hoping you can tell me in the comments whether these rankings are reasonable or not! Happy publishing.
Finally, here’s an Excel version of the full table, in case you want to re-sort by another column. Note that if a journal is not listed, that means I did not rank it. Feel free to ask about other journals in the comments.
American Economic Review: Insights is a new journal by the American Economic Association. It’s intended to replace the short paper section of the AER, and the first issue will appear this summer. Naturally, I’ve had quite a few discussions with colleagues about its likely quality: will AER: Insights be a top-tier journal like the AER, somewhere in the middle of the pack, or a flop?
Obviously, many factors affect the success of a journal. But how it starts out surely matters. Publish some amazing articles and become the journal at the top of people’s minds when they think about where to submit, which will in turn make it easier to attract high-quality articles. Publish some questionable research, and risk aversion will kick in, prompting people to submit elsewhere first and leaving you mostly with articles that other journals decided against publishing.
So I again dove into the database of Academic Sequitur. We track forthcoming articles, so even though the first issue of AER: Insights has not been published yet, we have 26 to-be-published articles in our database (the first of which were added in November of 2018, by the way!). The question I asked was simple: what percent of authors whose work is scheduled to appear in AER: Insights have published a top 5 article any time since 2000?
The answer is a whopping 67% (61% if you exclude AER articles). 58% have published in the AER, 23% have published in Econometrica, 38% have published in QJE, and 39% have published in ReStud. The overall percentage is non-trivially higher than that of any other journal except for Journal of Economic Literature.
Perhaps these numbers are not surprising to you. In fact, courting well-published authors may very well be a strategy that AER: Insights is consciously employing to gain early traction. And these statistics could signal that it’s difficult to get published there unless you’re well-known, at least at this point (though we don’t know what the distribution of submissions looks like). But more information is better, and this certainly makes me more likely to try for AER: Insights in the future!
In a fascinating series of graphs, David Card and Stefano DellaVigna document that submissions to the top 5 economics journals have gone up from about 3,000 per year in the early 1990s to over 7,000 per year in the mid-2010s, while acceptance rates have decreased from 10-15% to 2.4-5.6%. A natural interpretation of these trends is that it has gotten much harder to publish in top journals, all else equal. But could there be more to this story? There are (at least) three reasons things are not as clear as they seem at first glance.
First, submitting a paper to a journal has certainly become easier since the 1990s. It used to be (or so I’m told) that you had to physically print out the paper, put it into an envelope, send it somewhere and wait for a return envelope, which meant that even desk rejections took a lot longer than they do now, on average. The whole process should have been inherently harder, which meant people may have thought twice before trying for a Hail Mary at a top journal. With more and more people having internet access and virtually all top journals accepting electronic submissions, I would predict that a lot more people are willing to shop their papers around for a while in hopes of hitting the best journal possible.
Second, the emphasis on having a top-5 publication seems to have increased steadily over the past 20 years or so. Of course, there could be reverse causality here – if publishing a paper in the top 5 is harder, then success should be valued more, on average. But I think it’s likely that the emphasis on the top 5 journals has caused more people to try to publish papers there that they would otherwise send somewhere else, driving submissions up and acceptance rates down. For example, authors may now try for three of the five journals instead of just one (I may or may not have done this myself), so a single paper will show up as three submissions in the statistics above.
Third, the graphs show that the number of authors per paper has gone up too, so conditional on the total number of published papers being constant, the total number of people with a top 5 publication should be going up!
But shouldn’t the first two trends mean it’s gotten harder to publish in top journals, as the original statistics imply? This is true only if the marginal papers are about as good as the average papers. And that is very unlikely to be the case. Given the high rates of desk rejection at many journals, it seems more likely that many people are trying for a very unlikely outcome. In fact, it’s possible that the probability of a good paper getting into a top journal is essentially unchanged, because each marginal paper has an incredibly tiny probability of acceptance. Of course, it’s hard to believe that the publishing process is that accurate, so things have likely gotten at least somewhat more competitive at the top journals. But I highly doubt that the level of competition is anywhere close to 2-5 times higher, as the statistics imply.
The real question we should be asking is: are there more high-quality papers per high-quality journal (somehow controlling for general progress in methods, computing power, and data)? Or, if we’re interested in the likelihood of a person having a top-5 publication, are there more high-quality academics per high-quality journal? Both of these questions are really difficult to answer, but I found no statistics suggesting that there have been significant increases in the number of academic economists over this time period. For example, this website from 2006 says that “the number of new PhDs who intend to pursue careers in the US has declined.” This report indicates that there were 24,886 AEA members and subscribers in 1999 and this one says there were 23,170 in 2017. My overall conclusion is that the decline in acceptance rates should not be interpreted as really bad news for good papers trying to get into good journals.