Who is publishing in AER: Insights? An update

Over a year ago, I wrote a post tabulating the share of AER: Insights authors who have also published in a top-5 journal*. (The answer was 67%, significantly higher than the corresponding share at most other journals, except those that generally solicit papers, like the Journal of Economic Literature.)

Now that AER: Insights is in its second year of publishing and has 60 forthcoming/published articles, I decided to revisit this question, again using Academic Sequitur data. The graph below shows the percent of authors who (a) have published/are forthcoming in a given journal in 2018-2020 and (b) have had at least one top-5 article published since 2000. The ten journals shown are those ranking highest on this metric.
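For the curious, here is a minimal sketch of how this metric can be computed in Python. The table layout and column names (author_id, journal, year) are my own assumptions, not the actual Academic Sequitur schema:

```python
import pandas as pd

# Hypothetical input: one row per author-article pair,
# with columns author_id, journal, year.
articles = pd.read_csv("author_articles.csv")

TOP5 = {
    "American Economic Review", "Econometrica",
    "Journal of Political Economy", "Quarterly Journal of Economics",
    "Review of Economic Studies",
}

# Authors with at least one top-5 article since 2000.
top5_authors = set(
    articles.loc[articles.journal.isin(TOP5) & (articles.year >= 2000), "author_id"]
)

# For each journal, the share of its distinct 2018-2020 authors in that set.
recent = articles[articles.year.between(2018, 2020)]
share = (
    recent.drop_duplicates(["journal", "author_id"])
          .assign(has_top5=lambda d: d.author_id.isin(top5_authors))
          .groupby("journal")["has_top5"]
          .mean()
          .sort_values(ascending=False)
)
print(share.head(10))  # the ten journals shown in the graph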

With a score of 66%, AER: Insights still has the highest share of top-5 authors among journals where submissions are not generally solicited.** The next-highest journal, Theoretical Economics, is five percentage points behind. (There is some indication that the share for AER: Insights is coming down: for articles accepted in 2020, the top-5 share was “only” 60%.)

What if we condition on having two or more top-5 publications? That actually causes AER: Insights to move up in the ranking, overtaking Brookings Papers on Economic Activity.

Whether this pattern exists because AER: Insights is extremely selective or because less-established scholars are reluctant to submit their work to a new-ish journal or for some other reason is impossible to know without submission data. But no matter how you look at it, the group currently publishing in AER: Insights is quite elite.




*Top 5 is defined as American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics, and Review of Economic Studies.

**AER: Insights would be even higher-ranked by this metric (#3) if we ignored top-5 publications in American Economic Review. Therefore, this pattern is not driven by the fact that both journals are published by the AEA.

Where do men and women economists publish?

We all know that economics is, on the whole, a male-dominated discipline. But how does the representation of women look across different journals? Armed with Academic Sequitur* article metadata (going back to around 2000), I determined the genders of 82% of all authors in the data and calculated the prevalence of male authors by journal for 50 top-ranked journals in economics.** To see how things have changed over time, I also repeated this exercise with articles that were published in 2018-2019.

Just to set some expectations: in the gender-matched dataset, 82% of author-article observations are male (80% when restricted to 2018-2019). So if a journal has, say, 75% male authors, it’s doing better than average. With that, here are the top 10 male-dominated journals, ranked by share of male authors over the entire data period.*** To be super-duper scientific, 95 percent confidence intervals are also shown, and I added a vertical line at 82.1% for easy benchmarking to the average.
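For anyone who wants to replicate the confidence intervals, here is a rough sketch of the journal-by-journal regression described in the third footnote below. The file and column names are assumptions on my part:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per author-article observation,
# with columns journal and male (a 0/1 indicator).
df = pd.read_csv("gender_matched.csv")

rows = []
for journal, g in df.groupby("journal"):
    # Regressing the male indicator on a constant recovers the share male;
    # fitting with HC1 gives heteroskedasticity-robust standard errors.
    res = sm.OLS(g["male"].to_numpy(dtype=float), np.ones(len(g))).fit(cov_type="HC1")
    lo, hi = res.conf_int()[0]  # 95% CI by default
    rows.append({"journal": journal, "share_male": res.params[0], "lo": lo, "hi": hi})

shares = pd.DataFrame(rows).sort_values("share_male", ascending=False)
print(shares.head(10))  # the ten most male-dominated journals
```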

So three of the top five journals (Econometrica, QJE, and ReStud) have also been the three most male-dominated journals, at least historically, with 90%, 89%, and 88% male authors, respectively. A fourth (Journal of Political Economy) also barely made the top ten, with 87% male authors. These numbers also illustrate that there’s not much difference between the #1 and #10 male-dominated journal.

Encouragingly, there are some improvements as well. The share of male authors in QJE was almost 9 percentage points lower in 2018-2019 compared to the whole sample period. JPE’s share decreased by 7 percentage points, putting these journals in the top 5 most improved. If ranked based on 2018-2019 shares, Econometrica would be #6, ReStud would be #11, QJE would be #24, and JPE would be #28, just barely in the bottom half.

The Journal of Finance, by contrast, has taken a small but statistically significant step backwards, with a 3 percentage point increase in the share of male authors. If ranked by the 2018-2019 male ratio, it would be number 1.

Here are the least male-dominated journals (rank 41-50). Economics of Education Review and JHR are both about 66% male. Surprisingly, both applied AEJs are in the least male-dominated group (AEJ: Applied is 71% male; AEJ: Policy is 74%). This may be because they are newer, though it is worth noting that their overall average is below the 2018-2019 average of 80%.

Here’s the rest of the pack. First, here are journals ranked 31-40 on the male-dominated scale (i.e., next 10 least male-dominated), ordered by share male in the overall sample. AER and ReStat are in this group, with 80% and 81% male, respectively. Thus, AER has historically been an outlier among the top five on this dimension (using 2018-2019 shares, it would rank #19, right in the middle of the other top five journals).

Here’s rank 21-30, all in the low-to-mid 80s.

And here’s rank 11-20. AER: Insights is 84% male. The other two AEJs are in this group, with males representing about 85% of all author-article observations.

These patterns do not necessarily reflect discrimination: the representation of women in a particular field will obviously make a difference here (as evidenced by the positions of macro and theory journals). I leave it up to you, the reader, to interpret the numbers.****

___________________________________________________________________________

* Academic Sequitur is a tool I developed to help researchers keep up with new literature. You tell us what you want to follow, and we send you weekly (or daily!) emails with article abstracts matching your criteria.

** Close to 1.5 percent of the initial observations are dropped because only the initials of the author are available. About 16.5 percent of the observations cannot be mapped to a name for which the gender is known. This includes a lot of Chinese names, for which it is very difficult to determine gender, according to my brief internet research. Names that can be either male or female are assigned a gender based on the relative probability of the name being male.
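As a toy illustration of that last rule (the name counts and lookup table below are invented, and the real matching procedure may differ):

```python
# Toy name-frequency table: counts of a first name by gender.
# These counts are made up for illustration.
NAME_COUNTS = {
    "jordan": {"male": 8000, "female": 2000},
    "maria": {"male": 50, "female": 9000},
}

def assign_gender(first_name):
    counts = NAME_COUNTS.get(first_name.lower())
    if counts is None:
        return None  # unmatched names are dropped from the analysis
    p_male = counts["male"] / (counts["male"] + counts["female"])
    # Ambiguous names get whichever gender is relatively more likely.
    return "male" if p_male >= 0.5 else "female"

print(assign_gender("Jordan"))  # "male" under these toy counts
```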

*** Each observation in the sample is an article-author, so those who publish in a journal multiple times will contribute relatively more to its average. Each coefficient is from a journal-specific regression. Confidence intervals are based on heteroskedasticity-robust standard errors.

**** If you want the numbers underlying these graphs, you can download the csv file here.

What publishes in top-5 economics journals?

Part I: agricultural economics, lab experiments, field experiments & economics of education

Most of us have a sense that it is more difficult to get certain topics published in the top 5 economics journals (American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics, and Review of Economic Studies), but there is not much hard data on this. And if a particular topic appears infrequently in top journals, it may simply be because it’s a relatively rare topic overall.

To get more evidence on this issue, I used Academic Sequitur data, which covers the majority of widely-read journals in economics. The dataset I used contains articles from 139 economics journals and spans the years 2000-2019. On average, 6 percent of the papers in the dataset were published in a top 5 journal.

I classified papers into topics based on the presence of certain keywords in the abstract and title.* I chose the keywords carefully, aiming both to minimize the share of irrelevant articles captured and to avoid omitting relevant ones. While there is certainly some measurement error, it should not systematically bias the results. (Though readers should think of this as a “fun-level” analysis rather than a “rigorously peer-reviewed” one.)

I chose topics based on suggestions in response to an earlier Tweet of mine. To keep things manageable, I’m going to focus on a few topics at a time. To start off, I looked at agricultural economics (5.3% of articles in the dataset), field experiments (1.0% of articles), lab experiments (1.9% of articles), and education (1.8% of articles). I chose these to have some topic diversity and also because these topics were relatively easy to identify.** I then ran a simple OLS regression of a “top 5” indicator on each topic indicator (separately).***
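Here is a rough sketch of how the classification and regressions could be coded, using a small subset of the keyword lists from the endnotes. The data layout and column names are my assumptions:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per article, with columns title, abstract, journal.
papers = pd.read_csv("articles_2000_2019.csv")

TOP5 = {
    "American Economic Review", "Econometrica",
    "Journal of Political Economy", "Quarterly Journal of Economics",
    "Review of Economic Studies",
}
papers["top5"] = papers["journal"].isin(TOP5).astype(float)

# A small subset of the keyword lists from the endnotes;
# matches are case-insensitive and partial words count.
KEYWORDS = {
    "ag_econ": ["farm", "crop insurance", "agribusiness", "beef", "poultry"],
    "field_exp": ["field experiment", "experiment in the field"],
    "lab_exp": ["lab experiment", "laboratory experiment", "experimental data"],
    "education": ["teacher", "kindergarten", "student achievement", "school district"],
}

text = (papers["title"].fillna("") + " " + papers["abstract"].fillna("")).str.lower()
for topic, words in KEYWORDS.items():
    papers[topic] = text.str.contains("|".join(words)).astype(float)
    # Separate OLS of the top-5 indicator on each topic indicator,
    # with heteroskedasticity-robust standard errors.
    res = sm.OLS(papers["top5"], sm.add_constant(papers[topic])).fit(cov_type="HC1")
    print(f"{topic}: coef = {res.params[topic]:+.4f} (se = {res.bse[topic]:.4f})")
```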

The results are plotted in the graph below. Field-experiment papers are much more likely to be published in a top 5 journal than in the other 134 journals (about 5 percentage points more likely!), while lab-experiment papers are much less likely. Education doesn’t seem to be favored one way or the other, while agriculture is penalized about as much as field experiments are rewarded. Moral of the story: if you want to publish an ag paper in a top 5, make it a field experiment!

Now you might be saying, “I can’t even name 139 economics journals, so maybe this isn’t the relevant sample on which to run this regression.” Fair point (though see here for a way way longer list of econ journals). To address this, I restricted the set of journals to the 20 best-known general-interest journals—including the top 5—and re-generated the results.**** With the exception of lab experiments, the picture now looks quite different: both field experiments and education research are penalized by the top 5 journals, but agriculture is not.

Combining the two sets of results, we can conclude that the top 5 penalize agricultural economics research, but so do the other good general-interest journals. The top 5 journals also penalize field experiments relative to other good general-interest journals, but top general-interest journals as a whole reward field experiments relative to other journals. Finally, top 5 journals penalize education relative to other good general-interest journals, but not relative to the field as a whole.

The second set of results is obviously sensitive to the set of journals considered. If I were to add field journals like the American Journal of Agricultural Economics, things would again look much worse for ag. And how much worse they look for a particular topic depends on how many articles the field journal publishes. So I prefer the most inclusive set of journals, but I welcome suggestions about which set of journals to use in future analyses! Would also love to hear everyone’s thoughts on this exercise in general, so please leave a comment.

——————————————————————————————————————

Endnotes

*I did not use JEL codes because many journals do not require or publish these and we therefore do not collect them. JEL codes are also easier to select strategically than the words in the title and abstract.

** An article falls into the category of agricultural economics if it contains any of the following words/phrases in the abstract or title (not case-sensitive, partial word matches count): “farm”, “crop insurance”, “crop yield”, “cash crop”, “crop production”, “crops production”, “meat processing”, “dairy processing”, “grain market”, “crop management”, “agribusiness”, “beef”, “poultry”, “hog price”, “cattle industry”, “rice cultivation”, “wheat cultivation”, “grain cultivation”, “grain yield”, “crop diversity”, “soil conditions”, “dairy sector”, “hectare”, “sugar mill”, “corn seed”, “soybean seed”, “maize production”, “soil quality”, “agricultural chemical use”, “forest”. Field experiment: “field experiment”, “experiment in the field”. Lab experiment: “lab experiment”, “laboratory experiment”, “experimental data”, “randomized subject”, “online experiment”. Education: “return to education”, “returns to education”, “college graduate”, “schooling complet”, “teacher”, “kindergarten”, “preschool”, “community college”, “academic achievement”, “academic performance”, “postsecondary”, “educational spending”, “student performance”, “student achievement”, “student outcome”, “student learning”, “higher education”, “educational choice”, “student academic progress”, “public education”, “school facilit”, “education system”, “school voucher”, “private school”, “school district”, “education intervention”. Articles may fall into multiple categories.

*** Standard errors are heteroskedasticity-robust.

**** The 15 additional journals are (in alphabetical order): American Economic Journal: Applied Economics, American Economic Journal: Economic Policy, American Economic Journal: Macroeconomics, American Economic Journal: Microeconomics, American Economic Review: Insights, Economic Journal, Economic Policy, Economica, European Economic Review, Journal of the European Economic Association, Oxford Economic Papers, Quantitative Economics, RAND Journal of Economics, Review of Economics and Statistics, Scandinavian Journal of Economics.

How to write a good referee report

Given the centrality of peer review in academic publishing, it might astonish some to learn that peer review training is not a formal component of any PhD program. Academics largely learn how to do peer review by osmosis: through seeing reports written by their advisors and colleagues, through being on the receiving end of them, and through experience. The result is perhaps predictable: lots of disgruntled researchers and the formation of such groups as “Reviewer 2 must be stopped” on Facebook.

This post is my attempt to make the world a better place by giving some advice on peer review. I have written over 100 reports, and I would like to think I do a good and efficient job (then again, I also mostly learned through osmosis, so you be the judge). Some of my advice is based on a great paper by Berk, Harvey, and Hirshleifer: “How to Write an Effective Referee Report and Improve the Scientific Review Process” (Journal of Economic Perspectives, 2017).

  1. As a reviewer, your job is to decide whether the paper is publishable in its current form and what would make it publishable if it is not. This is a distinct role from that of a copyeditor, whose job is to scrutinize every word and sentence, or a coauthor, whose job is to improve the contribution and substance of the paper. A reviewer’s goal is not to improve the paper, but to evaluate it, even though in the process of evaluating it, he may make suggestions that improve it. Of course, it is difficult for people to completely separate their own opinions from objective facts, but the harder we strive to play the right role, the fairer and smoother the review process will be.
  2. Your explanation of the paper’s strengths and weaknesses is more important than your recommendation. Many of us agonize over whether to recommend rejection or revise-and-resubmit. But reviewers do not know how many other submissions the journal receives or what their quality is. Even if you think the paper is great, it may be rejected because there are many papers that are even better. And a mediocre paper may make the cut if the other submissions are inferior to it. So the biggest service you can do for the editor is to help her rank the paper against the other submissions she is handling. Thus, you should aim to explain to the editor what’s most impressive about the paper and what is lacking. The recommendation itself is secondary. When I recommend a rejection, I use the letter to the editor to outline the issues that make the paper unpublishable (there are usually 1-3) and explain why I don’t think the authors can fix them.
  3. In case of rejection, make it clear to the authors what the deal-breakers are. The most frustrating and confusing reports to get are ones that raise seemingly addressable issues but are accompanied by a rejection recommendation. It may seem easier to save the “worst” for the letter to the editor, but it will leave the authors trying to guess why exactly the paper was rejected. Anecdotally, the most likely conclusion they will come to is “The reviewer just didn’t like the paper and then looked for reasons to reject it”, which is how Reviewer 2 groups get formed. Of course, you should use professional and courteous language in your reports. But don’t hide your ultimate opinion about the paper from the authors.
  4. In case of a revise-and-resubmit, make it clear to the authors what the must-dos and nice-to-dos are. Point 1 does not mean you should avoid suggestions that wouldn’t make or break publication. Many of my papers were improved by suggestions that weren’t central to the revision (for example, a reviewer suggested a great title change once). So if you have a good idea for improving the paper, by all means share it with the authors. But keep in mind that they will have at least one and maybe two or three other reviewers to satisfy, and the “to do” list can quickly spiral out of control. Sometimes the editor will tell the authors which reviewer comments to address and which to ignore. But sometimes the editor will pass on the comments to the authors as is. By separating your comments into those you think are indispensable and those that are optional, you’ll be doing the authors a big favor.
  5. Don’t spend a lot of time on a paper that you’re sure you’re going to reject. This is perhaps the most controversial piece of advice (see this Tweet & subsequent discussions) because some authors view the review process as a “peer feedback” system. But it is not (see point 1). And, at least in economics, many of us are overwhelmed with review requests and editors sometimes have a hard time finding available reviewers. Treating the review process as “peer feedback” exacerbates this problem. If you think the authors’ basic premise is fundamentally flawed or the data are so problematic that no answer obtained from them would be credible, you should not feel obligated to give comments on other parts of the paper. This does not mean that you should not be thorough – there are few things more frustrating than a reviewer complaining about something that was explicitly addressed by the authors. But in such cases you do not need to give feedback on parts of the paper that did not affect your decision.

Finally, I’d like to wrap up with an outline of how I actually do the review. First, I print out a physical copy of the paper and read it, highlighting/underlining and making notes in the margins or on a piece of paper. Second, I write a summary of the paper in my own words (it is useful for the editor to get an objective summary of the paper, and the authors can make sure I was on the same page as them). Third, I go through my handwritten comments and type the most relevant ones up, elaborating as needed. Fourth, I number my comments (helpful for referencing them in later stages, if applicable), order them from most to least important, and separate the deal-breakers or must-dos from the nice-to-dos. Fifth, I highlight the deal breakers (if rejecting) or must-dos (if suggesting revisions) in the letter to the editor. Finally, regardless of my recommendation, I try to say something nice about the paper both in the editor letter and in the report. Regardless of quality, most papers have something good about them, and authors might be just a tad happier if their hard work were acknowledged more often.

A new way of ranking journals 2.0 – journal connectedness

A few weeks ago, I proposed that one could rank journals based on what percent of a journal’s authors have also published in a top journal. I calculated this statistic for economics and for finance, using the top 5/top 3 journals as a reference point.

Of course, one does not have to give top journals such an out-sized influence. One beauty of this statistic is that it can be calculated for any pair of journals. That is, we can ask, what percent of authors that publish in journal X have also published in journal Y? This “journal connectedness” measure can also be used to infer quality. If you think journal X is good and you want to know whether Y or Z is better, you can see which of these two journals has a higher percentage of authors from X publishing there. Of course, with the additional flexibility of this ranking come more caveats. First, this metric is most relevant for comparing journals from the same field or general-interest journals. If X and Y are development journals and Z is a theory journal, then this metric will not be very informative. Additionally, it’s helpful to be sure that both Y and Z are worse than X. Otherwise, a low percentage in Z may just reflect more competition.

With those caveats out of the way, I again used Academic Sequitur‘s database and calculated this connectedness measure for 52 economics journals, using all articles since 2010. Posting the full matrix as data would be overkill (here’s a csv if you’re interested though), so I made a heat map. The square colors reflect what percent of authors that published in journal X have also published in journal Y. I omitted observations where X=Y to maximize the relevance of the scale.
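If you want to build a similar map yourself, here is a minimal sketch under the same kind of hypothetical author-article table as before (the original figure’s plotting details surely differ):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one row per author-article since 2010,
# with columns author_id and journal.
df = pd.read_csv("author_articles_2010plus.csv")

# The set of distinct authors for each journal.
authors = df.groupby("journal")["author_id"].agg(set)
journals = list(authors.index)
n = len(journals)

# M[i, j] = percent of journal i's authors who also published in journal j.
M = np.full((n, n), np.nan)  # diagonal left blank, as in the post
for i in range(n):
    for j in range(n):
        if i != j:
            M[i, j] = 100 * len(authors.iloc[i] & authors.iloc[j]) / len(authors.iloc[i])

plt.figure(figsize=(10, 10))
plt.imshow(M)
plt.colorbar(label="% of X's authors who also published in Y")
plt.xlabel("journal Y")
plt.ylabel("journal X")
plt.show()
```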

A few interesting patterns emerge. First, the overall percentages are generally low, mostly under 10 percent. The median value in the plot above is 3 percent and the average is 4.3 percent, but only 361 out of 2,652 squares are <0.5 percent. That means that a typical journal’s authors’ articles are dispersed across other journals rather than concentrated in some other journal. This makes sense if the typical journal is very disciplinary or if there are many equal-quality journals (eyeballing the raw matrix, it seems like a bit of both is going on, but I’ll let you explore that for yourself).

There are some notable exceptions. For example, 41% of those who have published in JAERE have published in JEEM, 54% of those who published in Theoretical Economics have published in JET, and 35% of those who have published in Quantitative Economics have published in the Journal of Econometrics. These relationships are highly asymmetric: only 13% of those who have published in JEEM have published in JAERE, only 16% of those who have published in JET have published in Theoretical Economics, and only 4% of those who have published in the Journal of Econometrics have published in Quantitative Economics.

There is also another important statistic contained in this map: horizontal lines with many green and light blue squares indicate journals that people seem to be systematically attracted to across the board. And then there’s that green cluster at the bottom left, with some yellows thrown in. Which journals are these?

I had the benefit of knowing what the data looked like before I made these heat maps, so I deliberately assigned ids 1-5 to the top 5 journals (the rest are in alphabetical order). So one pattern this exercise reveals is that authors from across the board are flocking to the top 5s (an alternative interpretation is that people with top 5s are dominating other journals’ publications). And people who publish in a top 5 tend to publish in other top 5s – that’s the bottom left corner. In fact, if you omitted the top 5s, as the next graph does, the picture would look a lot less colorful.

But even without the top 5, we see some prominent light blue/green horizontal lines, indicating “attractive” journals. The most line-like of these are: Journal of Public Economics, Journal of the European Economic Association, Review of Economics and Statistics, Economics Letters, and JEBO. Although JEBO was a bit surprising to me, overall it looks like this giant correlation matrix can be used to identify good general-interest journals. By contrast, the AEJs don’t show the same general attractiveness.

Finally, this matrix illustrates why Academic Sequitur is so useful. Most authors’ articles are published in more than just a few journals. Thus, to really follow someone’s work, one needs to either constantly check their webpage/Google Scholar profile, go to lots of conferences, or subscribe to many journals’ ToCs and filter them for relevant articles. Some of these strategies are perfectly feasible if one wants to follow just a few people. But most of us can think of way more people than that whose work we’re interested in. Personally, I follow 132 authors (here’s a list if you’re interested), and I’m sure I’ll be continuing to add to this list. Without an information aggregator, this would be a daunting task, but Academic Sequitur makes it easy. Self-promotion over!

If you think of anything else that can be gleaned from this matrix, please comment.

Ranking finance journals

Last week, I tried out a new way of “ranking” economics journals, based on the percent of 2018-2019 authors who have also published in one of the top 5 economics journals anytime since 2000. This week, I decided to take a look at finance journals (political science is next in line, as well as some extensions and robustness checks for econ journals).

The top 3 finance journals are generally agreed to be Journal of Finance, Journal of Financial Economics, and Review of Financial Studies. How do other finance journals stack up against them according to this metric? For fun and fairness, I threw the top 5 econ journals into the mix, as well as Management Science.

Here are the “top 10” journals according to this metric (not counting the reference top 3, of course). The first numerical column gives the percent of authors that published in the journal specified in the row in 2018-2019 who have also published an article in any of the top 3 finance journals at some point since 2000. The next three columns give journal-specific percentages.
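A sketch of how the table’s columns could be computed, again under a hypothetical author-article layout (file and column names are my own):

```python
import pandas as pd

# Hypothetical input: one row per author-article, with author_id, journal, year.
articles = pd.read_csv("author_articles.csv")

TOP3 = ["Journal of Finance", "Journal of Financial Economics",
        "Review of Financial Studies"]

# Per-author flags: has this author published in each reference journal since 2000?
since2000 = articles[articles.year >= 2000]
flags = pd.DataFrame(index=pd.Index(articles.author_id.unique(), name="author_id"))
for j in TOP3:
    flags[j] = flags.index.isin(since2000.loc[since2000.journal == j, "author_id"])
flags["any top 3"] = flags[TOP3].any(axis=1)

# Average the flags over each journal's distinct 2018-2019 authors.
recent = (articles[articles.year.between(2018, 2019)]
          .drop_duplicates(["journal", "author_id"]))
table = (recent.join(flags, on="author_id")
               .groupby("journal")[["any top 3"] + TOP3]
               .mean() * 100)
print(table.sort_values("any top 3", ascending=False).round(1).head(10))
```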

Because this is not my field, I have less to say about the reasonableness of this ranking, but perhaps finance readers can comment on whether this lines up with their perception of quality. Compared to the econ rankings, the raw percentage differences between journals appear larger, at least at the very top. And the overall frequency of publishing in the top 3 is lower. Management Science makes the top 5, but the top econ journals do not (JPE and ReStud do make the top 10). To me, this makes sense, since it’s pretty clear that this ranking picks up connectedness as well as quality. Anecdotally, finance departments seem to value Management Science and the top 5 econ journals no more than, and perhaps less than, the top 3 finance journals.

Here are the rest of the journals I ranked (as before, if a journal is not on the list, it doesn’t mean it’s ranked lower; it means I didn’t rank it). Here, we can clearly see that not many people who publish in JF, JFE, and RFS publish in AER, QJE, or Econometrica.

If there’s another journal you’d like to see ranked in reference to the top 3 finance ones, please comment!

How good will AER: Insights be?

American Economic Review: Insights is a new journal by the American Economic Association. It’s intended to replace the short paper section of the AER, and the first issue will appear this summer. Naturally, I’ve had quite a few discussions with colleagues about its likely quality: will AER: Insights be a top-tier journal like the AER, somewhere in the middle of the pack, or a flop?

Obviously, many factors affect the success of a journal. But how it starts out surely matters. Publish some amazing articles, and you become the journal at the top of people’s minds when they think about where to submit, which in turn makes it easier to attract high-quality articles. Publish some questionable research, and risk aversion kicks in, prompting people to submit elsewhere first and leaving you mostly with articles that other journals decided against publishing.

So I again dove into the database of Academic Sequitur. We track forthcoming articles, so even though the first issue of AER: Insights has not been published yet, we have 26 to-be-published articles in our database (the first of which were added in November of 2018, by the way!). The question I asked was simple: what percent of authors whose work is scheduled to appear in AER: Insights have published a top 5 article any time since 2000?

The answer is a whopping 67% (61% if you exclude AER articles). 58% have published in the AER, 23% have published in Econometrica, 38% have published in QJE, and 39% have published in ReStud. The overall percentage is non-trivially higher than that of any other journal except for Journal of Economic Literature.

Perhaps these numbers are not surprising to you. In fact, it may very well be a strategy that AER: Insights is consciously employing to gain early traction. And these statistics could signal that it’s difficult to get published there unless you’re well-known, at least at this point (though we don’t know what the distribution of submissions looks like). But more information is better, and this certainly makes me more likely to try for AER: Insights in the future!

How to be a productive researcher

We are taught a lot of research skills in grad school. But a lot of these are specific technical skills. Little attention is devoted to the question of how to be a productive researcher. This “meta-skill” is usually learned the hard way through trial and error or, in the best-case scenario, through others’ advice. Here are my two cents on what works.

  1. Treat the research process as a skill you have to learn and maintain. No one is born knowing how to take a project from an idea to a published paper; some people just figure it out more quickly than others. And the more you practice, the easier it gets. Having the right attitude about this process can help you calibrate expectations and muster willingness to persevere.
  2. Protect your research time. Figure out when you work best (e.g., mornings or evenings) and minimize other commitments during those times. In my calendar, 8am-11am is marked as “research time” every single weekday. That reminds me not to schedule other things during that time. To avoid having to respond to requests with “I’m sorry, that time doesn’t work for me, I’ll be sitting in my office doing research,” I will often take the first step and suggest an afternoon meeting time. This doesn’t always work – for example, I taught 9:30-11am Tue/Thu last semester and some morning meetings are unavoidable – but it greatly improves my productivity overall. Remember, the fact that your plan to do research at a particular time does not involve another person does not mean that it is not a “real” commitment. In fact, your job (mostly) depends on it.
  3. Invest in your writing skills. Writing used to be difficult, and I would dread it. Nevertheless, I persevered and now writing is much easier and more enjoyable. Here are some specific suggestions.
    • Make an effort to write every day, during the time when your brain and body are at their best. For me, this is the morning.
    • Allow yourself to write “s&*^ty first drafts.” Do not try to spit out the perfect word/sentence/paragraph on the first try. Write freely, edit later.
    • Do not start out trying to write for hours at a time. If you are not used to writing regularly, aim for 30 minutes or even just 10 minutes. If you write for 10 minutes a day, that is almost an hour of writing per week. If you do 30 minutes a day, that adds up to 2.5 hours! The Pomodoro technique can be very helpful here.
    • Join a writing group. For about two years, I did Academic Writing Club, an online group where professors and grad students from related disciplines, guided by a “coach”, create weekly goals for themselves and check in daily with their progress. It is not free, but in my opinion worth it (and you can probably use your research budget). If you are looking for a free writing group, look for people around your university who are willing to get together and write!
  4. Prioritize projects based on how close they are to publication. (Obviously your coauthors’ preferences and constraints matter here, so this is a general guideline). Specifically, this should be the order of your priorities, if not on a daily, then at least on a weekly level:
    • Proofs of accepted papers that need to be turned around to the publisher. [When I first heard this suggestion, my reaction was, “I don’t have any proofs!” If that is the case, don’t worry, you will get there.]
    • Revise-and-resubmits.
    • Working papers that are closest to submission, whether these are brand new ones or rejected ones looking for a new home.
    • Projects that are closest to becoming working papers (e.g., ones where the analysis is complete).
    • Projects where you are analyzing the data (working with a model, if you’re a theorist).
    • Projects that are newer than everything above.

  5. Try to avoid being the bottleneck. If someone is waiting for you to do something on a project before she or he can work on it, try to prioritize that task. Obviously, one reason for this is that your coauthors may be annoyed if you take too long to do something you promised to do. But another (possibly more important) reason is that when you are not the bottleneck, your coauthors (or research assistants) can get their work done sooner, which boosts your annual productivity.

When to give up on a paper

Following the publication of the post on where to submit your paper, someone asked, “How do you know when it’s time to give up on a paper?”

This is a really hard question. We put a lot of work into our papers (I’m assuming in this post that it is a completed paper) and, despite the theoretical wisdom of “Ignore sunk costs”, it’s difficult to let go of months or years of hard work no matter how bleak things look. But there’s also no magic number of rejections beyond which it’s clear that you should just give up. Here are my two cents on how to make the decision.

First, here’s a clever trick I use to make “giving up” on a paper easier psychologically – I have never permanently given up on a paper. But I do have four papers and a lot of never-made-it-to-paper-stage projects “on the back burner”. I haven’t worked on them for years and don’t plan on doing so unless I have nothing better to do. In other words, instead of asking the hard question of “Should I never try to publish this paper again?”, ask the easier question of “Should I prioritize other projects over this paper for now?” I always have the option to pull papers out of the “back burner” folder, but lo and behold, I keep having better projects to work on and don’t think much about the archived ones.

Of course, that still leaves the question of “Should I prioritize other projects over this paper?” open. I’ll discuss three related cases where this question becomes relevant and offer some general guidance for how to decide.

#1 Your paper has gotten rejected multiple (let’s say at least five) times for roughly the same reason, you don’t think you can do anything to address that shortcoming, and you have other, more promising, projects/ideas. If that reason is “this paper isn’t making enough of a contribution” AND you’ve revised your introduction substantially in between submissions to make the best possible case for your contribution, this may be a sign that it’s time to drop down a tier (though see some discussion below on when this is a good idea). At the same time, the contribution of a paper is hugely subjective. If the only thing reviewers find wrong with your paper is the contribution, then trying another journal within the same tier is fairly low-cost, assuming your contribution is actually within the realm of what gets published by the tier of journals you’ve been submitting to. Here, talking to senior colleagues is especially helpful.

If the reason your paper keeps getting rejected is something related to the paper’s data/methodology – for example, no one believes your instrument, no matter how many robustness or placebo tests you’ve added – then dropping down a tier is also an option, but is less likely to be a successful strategy. I came close to giving up on a paper because no one seemed to like the IV. I ultimately decided to keep trying though because (a) a lot of the rejections were desk rejections, allowing me to re-submit without revising (since there was no real feedback given) and (b) I believed in the instrument myself and thought we made a good case for it. After six rejections, the paper was published.

By contrast, if your paper is getting rejected for diverse reasons, it is probably good to keep trying (though in that case I would recommend taking a close look at the writing to make sure your exposition is clear).

#2 You feel that your paper would only be publishable if you dropped to a tier of journals where your current colleagues generally don’t publish, and you have other, more promising, projects/ideas. (Presumably, you think you need to drop down a tier because of numerous rejections. Otherwise, perhaps you are underestimating your paper!) For better or worse, publishing in a journal that your department really looks down on is sometimes viewed as a negative. So, if you otherwise have a good chance of getting tenure at your department (and want to get tenure at your department), you may want to put the project down and move on to something else. Two of my archived papers were archived for this reason.

#3 It looks like the path to publication in an acceptable-tier journal would be painful and you have other, more promising, projects/ideas. Maybe your case is not as extreme as the two cases above: you’ve had 3-4 rejections, you feel like you may have a shot at an acceptable but not stellar journal tier but, given the feedback you’ve gotten so far, you have a gut feeling that it would be painful for various reasons. Maybe a ref said the paper is not well-written and after taking a close look, you realize that the ref is right and that the whole paper needs an overhaul (I speak from experience). Maybe you have your own misgivings about the methodology/data and feel like an overhaul there is warranted. If you have other great projects in the pipeline with a lower cost-benefit ratio, by all means feel free to prioritize them. No one said you have to publish every paper you write.

Yes, I put “you have other, more promising, projects/ideas” in every entry on purpose. If you don’t have any other projects or ideas that have a reasonable shot at being published in the same tier or higher than what you’ve been submitting to, then keep working on publishing the paper, even if it means a major overhaul. Use the suggestions I wrote about in a previous post on what to do after a rejection. While you wait for reviews, work on new projects and ideas; if a better one comes along and your submission gets rejected, by all means abandon the project.

A final word of caution is in order. According to my scientifically constructed chart below, our level of excitement about a project is always highest at the idea stage, when the promise seems unlimited and the pitfalls and barriers to getting there are not salient. So, if you find yourself constantly putting completed papers on the back burner and picking up new shiny ideas, stop! Go back to your best completed paper and publish it (and work on the shiny new ideas while you wait for reviews). Then repeat until you have a few publications.

What to do after a rejection

Benjamin Franklin wrote, “in this world nothing can be said to be certain, except death and taxes” (though the earliest origin of that idea dates to Christopher Bullock in 1716, apparently). Most academics would agree that paper rejections also belong on that list. My 10 published papers have been rejected a total of 29 times. I also have two “archived” papers that were collectively rejected eight times before I gave up on them and four working papers that so far have been rejected seven times (two are now revise-and-resubmit, so the rate of rejection is decreasing). So I have a total of 44 rejections. I have ZERO papers that got a revise-and-resubmit at the first journal I submitted them to (= each of my papers has been rejected at least once). I’m not even counting conference and grant rejections.

Paper rejections come in many shapes and sizes: your run-of-the-mill “Nice paper, but not enough of a contribution for this journal” or “Too many little things wrong” rejections; a reviewer finding something genuinely wrong with your manuscript; boilerplate desk rejections; a half-page report from a lazy reviewer who clearly hasn’t read your paper; and the frustrating “I just don’t believe your results” rejection. Rejections don’t feel good, but given that they are inevitable, it’s important to learn how to deal with them and move past them as quickly as possible. Below, I provide some suggestions that have worked for me.

First, allow yourself to take a few days to “mourn” the decision. A few days of inaction after a rejection won’t make much of a difference. I typically don’t even read the referee reports closely until it’s been a few days because I’m not confident in my ability to take in the feedback objectively. By all means, trash-talk the referees to your colleagues (people at your institution almost surely won’t be asked to review your papers), join the “Reviewer 2 must be stopped” group on Facebook (especially if you don’t know what “Reviewer 2” refers to), have a drink or two (please drink responsibly), do some exercise, work on another paper, or binge-watch that show you’ve been waiting to see. Do be careful how you discuss your reports online or at conferences because you never know who your reviewers were or who might know who your reviewers were.

It is hard not to take rejections personally, but in the vast majority of cases, they are not. The reviewers rejected your paper, they did not reject you as a person or a researcher. Even the comments about your paper may not have anything to do with the quality of your paper. Some reviewers might strongly dislike a particular methodology or research area, others may have had a bad day or week, and some may operate in toxic environments where unnecessary harshness is disguised as “honesty”. Your reviewer may have been a graduate student doing a referee report for the first time or a senior professor drowning in service work. Almost everyone has a “Reviewer 2” story, including some of the best researchers, and you are not alone. If a reviewer seemed particularly unfair, talk to a senior colleague about appealing the decision. However, appeals are definitely not the standard way to deal with rejections.

Next comes the time for actual work. Unless the journal rejecting your paper was your last stop before you were going to abandon efforts to publish it, try to return to the reports within a week of the rejection and look at them objectively. It can be tempting to either (1) ignore the reports completely and send the paper back out as soon as possible or (2) treat the reports as a revise-and-resubmit and try to address all the reviewers’ comments. Neither approach is generally a good idea, for two reasons.

First, you may get the same reviewer again. In some fields, reviewing the same paper twice is not acceptable, so you may get a different draw in that case. But in economics and surely some other fields, it’s not uncommon for the same person to review the paper two or more times at different journals. In such a case, the best you can hope for if you didn’t change anything in your paper is that the reviewer will return the same report to the editor. But it’s also possible that the reviewer will be annoyed that you did not take into account any of the comments they worked hard to give you and treat your paper more harshly than the first time around. In short, you want to avoid giving the impression that you thought the comments so worthless that you did not address even one.

Second, even if you’re 100% sure you’re not going to get the same reviewer, it’s highly unlikely that the reviewers’ comments were completely idiosyncratic or idiotic. If you ignore a comment that you could have addressed and a subsequent reviewer has the same concern, your paper could end up rejected again for avoidable reasons. Despite all the “Reviewer 2” stories out there, I think the overall peer review process is far from completely broken, so it’s also very unlikely that all the comments are useless and wrong. In short, the best way to treat the reviewer reports following a rejection is as an opportunity to make your paper better.

When deciding whether to address a particular comment, I ask myself two things: (1) How likely is this comment to come up again? and (2) How easy is this for me to address? The higher the comment is on this two-dimensional likelihood-ease scale, the more you should jump at the chance to address it. Whether something is likely to come up again or not is the hardest question to answer. Here, thinking about comments you’ve gotten at conferences or asking colleagues for their feedback on a particular comment can be really helpful. Rigorous self-honesty helps too: with some introspection, most of us will be able to identify comments where the reviewer really does have a point. Once you’ve identified all the relevant comments, start addressing them one by one. Where to stop can be difficult to tell, but if you start with the comments that rank high on ease and/or likelihood, you can stop at any point with the knowledge that you’ve addressed the most important ones. For me, a good rule of thumb is that the paper should be ready to go back out within 1-3 months of not-full-time work (this is probably equivalent to about 1-3 weeks of full-time work for me). Anything more than that is likely to be excessive in most circumstances.

I’ll wrap up with two specific suggestions. If a reviewer comment makes it seem like she or he misunderstood something about what you’re doing, try to see if you can make that part of the paper clearer. You have the privilege of knowing your paper better than anyone else, so what seems clear to you may not be to the average reader. If there is a comment that seems likely to come up again but would be really difficult to address, you have a few options. You can add a brief explanation as to why doing X would be difficult, possibly as a footnote, possibly as a suggested avenue for future research. This signals to reviewers that you are aware of X. Relatedly, you can hint that you could do X but it’s outside of the scope of the current paper. That allows a persistent reviewer to insist on seeing X in a revision but reduces the likelihood that they reject the paper because you didn’t already do X.

In the end, these steps don’t necessarily make rejections more pleasant, but they do move your paper closer to published!