Some statistics about female authors in academia

Today, I again used data from the literature tracking tool Academic Sequitur, this time to examine some gender patterns in publishing across fields. I took article data from 2018-2020 and estimated the share of female authorships for 38 different research fields, as determined by the field of each journal.* I excluded names that could not be classified as female or male; thus, the female and male shares sum to 1 in each case.
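For the technically curious, here is a minimal sketch of the calculation, assuming a hypothetical list of authorship records in which each record already carries the journal's field and the gender inferred from the author's first name. This is an illustration of the arithmetic only, not Academic Sequitur's actual pipeline.

```python
from collections import defaultdict

# Hypothetical input: one record per authorship, with the journal's field and
# the gender inferred from the author's first name ("female", "male", or None
# when the name could not be classified).
authorships = [
    {"field": "mathematics", "gender": "female"},
    {"field": "mathematics", "gender": "male"},
    {"field": "mathematics", "gender": None},   # excluded from the denominator
    {"field": "demography",  "gender": "female"},
]

def share_female_by_field(records):
    """Share of female authorships per field, among classifiable names only."""
    female = defaultdict(int)
    classified = defaultdict(int)
    for r in records:
        if r["gender"] is None:          # name could not be classified: drop it
            continue
        classified[r["field"]] += 1
        if r["gender"] == "female":
            female[r["field"]] += 1
    # Female and male shares sum to 1 by construction.
    return {field: female[field] / classified[field] for field in classified}

print(share_female_by_field(authorships))
```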

What are the most male-dominated fields? Mathematics barely clears 20 percent female authors, with computer science and finance close behind (or ahead?). Economics just makes it over the 25 percent hurdle and has fewer female authors than engineering. Business does slightly better, with 32 percent female authors. Archeology rounds out this group with just under 40 percent women.

The bottom half of the male-dominated scale has many fields that are right around 40 percent female, including urban studies, neuroscience, epidemiology, health policy, and pharmacology. Finally, three fields have more than 50 percent female representation: demography (60.0 percent female), social work (65.7 percent female), and gender studies (66.0 percent female).

Although a few research fields were excluded from this analysis for conciseness, it's pretty clear that gender parity has a long way to go in the vast majority of academic fields, even if we look at the most recent data.

* A journal may belong to more than one field. Highly multidisciplinary journals, such as Nature, Science, and PNAS, were excluded from the sample.

Who is publishing in AER: Insights? An update

Over a year ago, I wrote a post tabulating the share of AER: Insights authors who have also published in a top-5 journal*. (The answer was 67%, significantly higher than the corresponding share for most other journals, except those that generally solicit papers, like the Journal of Economic Literature.)

Now that AER: Insights is in its second year of publishing and has 60 forthcoming/published articles, I decided to revisit this question, again using Academic Sequitur data. The graph below shows the percent of authors who (a) have published/are forthcoming in a given journal in 2018-2020 and (b) have had at least one top-5 article published since 2000. The journals below are the top ten by that metric.
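As a rough illustration of this metric (hypothetical data structures, not the actual Academic Sequitur code), the share for a single journal could be computed as below; calling the same function with min_pubs=2 gives the stricter cut discussed further down.

```python
# Hypothetical inputs: the set of author IDs with a 2018-2020 article in the
# journal, and a mapping from author ID to the number of top-5 articles that
# author has published since 2000.
journal_authors = {"a1", "a2", "a3", "a4"}
top5_counts = {"a1": 3, "a2": 1, "a4": 0}

def top5_share(authors, counts, min_pubs=1):
    """Percent of a journal's authors with at least `min_pubs` top-5 articles."""
    qualifying = sum(1 for a in authors if counts.get(a, 0) >= min_pubs)
    return 100 * qualifying / len(authors)

print(top5_share(journal_authors, top5_counts))               # at least one top-5 article
print(top5_share(journal_authors, top5_counts, min_pubs=2))   # two or more top-5 articles
```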

With a score of 66%, AER: Insights still has the highest share of top-5 authors among journals where submissions are not generally solicited.** The next-highest journal, Theoretical Economics, is five percentage points behind. (There is some indication that the share for AER: Insights is coming down: for articles accepted in 2020, the top-5 share was “only” 60%.)

What if we condition on having two or more top-5 publications? That actually causes AER: Insights to move up in the ranking, overtaking Brookings Papers on Economic Activity.

Whether this pattern exists because AER: Insights is extremely selective, because less-established scholars are reluctant to submit their work to a new-ish journal, or for some other reason is impossible to know without submission data. But no matter how you look at it, the group currently publishing in AER: Insights is quite elite.

*Top 5 is defined as American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics, and Review of Economic Studies.

**AER: Insights would be even higher-ranked by this metric (#3) if we ignored top-5 publications in American Economic Review. Therefore, this pattern is not driven by the fact that both journals are published by the AEA.

How to write a good referee report

Given the centrality of peer review in academic publishing, it might astonish some to learn that peer review training is not a formal component of any PhD program. Academics largely learn how to do peer review by osmosis: through seeing reports written by their advisors and colleagues, through being on the receiving end of them, and through experience. The result is perhaps predictable: lots of disgruntled researchers and the formation of such groups as “Reviewer 2 must be stopped” on Facebook.

This post is my attempt to make the world a better place by giving some advice on peer review. I have written over 100 reports, and I would like to think I do a good and efficient job (then again, I also mostly learned through osmosis, so you be the judge). Some of my advice is based on a great paper by Berk, Harvey, and Hirshleifer: “How to Write an Effective Referee Report and Improve the Scientific Review Process” (Journal of Economic Perspectives, 2017).

  1. As a reviewer, your job is to decide whether the paper is publishable in its current form and what would make it publishable if it is not. This is a distinct role from that of a copyeditor, whose job is to scrutinize every word and sentence, or a coauthor, whose job is to improve the contribution and substance of the paper. A reviewer’s goal is not to improve the paper, but to evaluate it, even though in the process of evaluating it, he may make suggestions that improve it. Of course, it is difficult for people to completely separate their own opinions from objective facts, but the harder we strive to play the right role, the fairer and smoother the review process will be.
  2. Your explanation of the paper's strengths and weaknesses is more important than your recommendation. Many of us agonize over whether to recommend rejection or revise-and-resubmit. But reviewers do not know how many other submissions the journal receives or what their quality is. Even if you think the paper is great, it may be rejected because there are many papers that are even better. And a mediocre paper may make the cut if the other submissions are inferior to it. So the biggest service you can do for the editor is to help her rank the paper against the other submissions she is handling. Thus, you should aim to explain to the editor what's most impressive about the paper and what is lacking. The recommendation itself is secondary. When I recommend a rejection, I use the letter to the editor to outline the issues that make the paper unpublishable (there are usually 1-3) and why I don't think they can be fixed by the authors.
  3. In case of rejection, make it clear to the authors what the deal-breakers are. The most frustrating and confusing reports to get are ones that raise seemingly addressable issues but are accompanied by a rejection recommendation. It may seem easier to save the “worst” for the letter to the editor, but it will leave the authors trying to guess why exactly the paper was rejected. Anecdotally, the most likely conclusion they will come to is “The reviewer just didn’t like the paper and then looked for reasons to reject it”, which is how Reviewer 2 groups get formed. Of course, you should use professional and courteous language in your reports. But don’t hide your ultimate opinion about the paper from the authors.
  4. In case of a revise-and-resubmit, make it clear to the authors what the must-dos and nice-to-dos are. Point 1 does not mean you should avoid suggestions that wouldn't make or break publication. Many of my papers were improved by suggestions that weren't central to the revision (for example, a reviewer once suggested a great title change). So if you have a good idea for improving the paper, by all means share it with the authors. But keep in mind that they will have at least one and possibly two or three other reviewers to satisfy, and the "to do" list can quickly spiral out of control. Sometimes the editor will tell the authors which reviewer comments to address and which to ignore. But sometimes the editor will pass on the comments to the authors as is. By separating your comments into those you think are indispensable and those that are optional, you'll be doing the authors a big favor.
  5. Don’t spend a lot of time on a paper that you’re sure you’re going to reject. This is perhaps the most controversial piece of advice (see this Tweet & subsequent discussions) because some authors view the review process as a “peer feedback” system. But it is not (see point 1). And, at least in economics, many of us are overwhelmed with review requests and editors sometimes have a hard time finding available reviewers. Treating the review process as “peer feedback” exacerbates this problem. If you think the authors’ basic premise is fundamentally flawed or the data are so problematic that no answer obtained from them would be credible, you should not feel obligated to give comments on other parts of the paper. This does not mean that you should not be thorough – there are few things more frustrating than a reviewer complaining about something that was explicitly addressed by the authors. But in such cases you do not need to give feedback on parts of the paper that did not affect your decision.

Finally, I'd like to wrap up with an outline of how I actually do the review. First, I print out a physical copy of the paper and read it, highlighting/underlining and making notes in the margins or on a piece of paper. Second, I write a summary of the paper in my own words (it is useful for the editor to get an objective summary of the paper, and it lets the authors check that I understood what they were doing). Third, I go through my handwritten comments and type up the most relevant ones, elaborating as needed. Fourth, I number my comments (helpful for referencing them in later stages, if applicable), order them from most to least important, and separate the deal-breakers or must-dos from the nice-to-dos. Fifth, I highlight the deal-breakers (if rejecting) or must-dos (if suggesting revisions) in the letter to the editor. Finally, regardless of my recommendation, I try to say something nice about the paper both in the editor letter and in the report. Regardless of overall quality, most papers have something good about them, and authors might be just a tad happier if their hard work were acknowledged more often.

Political Science Journal Rankings

How do we judge how good a journal is? Ideally by the quality of the articles it publishes. But so far, the best systematic measures of quality we've come up with are citation-based rankings. And these are far from perfect, as a simple Google search will reveal (here's one such article).

I’ve been using Academic Sequitur data to experiment with an alternative way of ranking journals. The basic idea is to calculate what percent of authors who published in journal X have also published in a top journal for that discipline (journals can also be ranked relative to every other journal, but the result is more difficult to understand). As you might imagine, this ranking is also not perfect, but it has yielded very reasonable results in economics (see here).

Now it’s time to try this ranking out in a field outside my own: Political Science. As a reference point, I took 3 top political science journals: American Political Science Review (APSR), American Journal of Political Science (AJPS), and Journal of Politics (JOP). I then calculated what percent of authors who published in each of 20 other journals since 2018 have also published a top-3 article at any point since 2000.
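Under the same assumptions as the sketches in the earlier posts (made-up inputs, not the actual code), turning that per-journal share into a ranking is just a matter of computing it for every journal and sorting:

```python
# Hypothetical inputs: author IDs per journal since 2018, and the set of
# author IDs with a top-3 (APSR/AJPS/JOP) article since 2000.
authors_by_journal = {
    "Quarterly Journal of Political Science": {"a1", "a2", "a3"},
    "Political Analysis": {"a2", "a4", "a5", "a6"},
    # ... remaining journals
}
top3_authors = {"a1", "a2", "a4"}

def rank_journals(authors_by_journal, top3_authors, top_n=10):
    """Rank journals by the share of their recent authors who also have a top-3 article."""
    shares = {
        journal: 100 * len(authors & top3_authors) / len(authors)
        for journal, authors in authors_by_journal.items()
        if authors  # guard against journals with no authors in the sample
    }
    return sorted(shares.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

for journal, share in rank_journals(authors_by_journal, top3_authors):
    print(f"{share:5.1f}%  {journal}")
```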

Here are the top 10 journals, according to this ranking (the above-mentioned stat is in the first column).


Quarterly Journal of Political Science and International Organization come out as the top 2. This is noteworthy because alternative lists of top political science journals that were suggested to me included these two journals! Political Analysis is close behind, followed by a group of 5 journals with very similar percentages overall (suggesting similar quality).

Below is the next set of ten. Since this is not my research area, I’m hoping you can tell me in the comments whether these rankings are reasonable or not! Happy publishing.

Finally, here's an Excel version of the full table, in case you want to re-sort by another column. Note that if a journal is not listed, I did not rank it. Feel free to ask about other journals in the comments.