How to write a good referee report

Given the centrality of peer review in academic publishing, it might astonish some to learn that peer review training is rarely a formal component of a PhD program. Academics largely learn how to do peer review by osmosis: through seeing reports written by their advisors and colleagues, through being on the receiving end of them, and through experience. The result is perhaps predictable: lots of disgruntled researchers and the formation of such groups as “Reviewer 2 must be stopped” on Facebook.

This post is my attempt to make the world a better place by giving some advice on peer review. I have written over 100 reports, and I would like to think I do a good and efficient job (then again, I also mostly learned through osmosis, so you be the judge). Some of my advice is based on a great paper by Berk, Harvey, and Hirshleifer: “How to Write an Effective Referee Report and Improve the Scientific Review Process” (Journal of Economic Perspectives, 2017).

  1. As a reviewer, your job is to decide whether the paper is publishable in its current form and what would make it publishable if it is not. This is a distinct role from that of a copyeditor, whose job is to scrutinize every word and sentence, or a coauthor, whose job is to improve the contribution and substance of the paper. A reviewer’s goal is not to improve the paper, but to evaluate it, even though in the process of evaluating it, he may make suggestions that improve it. Of course, it is difficult for people to completely separate their own opinions from objective facts, but the harder we strive to play the right role, the fairer and smoother the review process will be.
  2. Your explanation of the paper’s strengths and weaknesses is more important than your recommendation. Many of us agonize over whether to recommend rejection or revise-and-resubmit. But reviewers do not know how many other submissions the journal receives or what their quality is. Even if you think the paper is great, it may be rejected because there are many papers that are even better. And a mediocre paper may make the cut if the other submissions are inferior to it. So the biggest service you can do for the editor is to help her rank the paper against the other submissions she is handling. Thus, you should aim to explain to the editor what’s most impressive about the paper and what is lacking. The recommendation itself is secondary. When I recommend a rejection, I use the letter to the editor to outline the issues that make the paper unpublishable (there are usually 1-3), and why I don’t think they can be fixed by the authors.
  3. In case of rejection, make it clear to the authors what the deal-breakers are. The most frustrating and confusing reports to get are ones that raise seemingly addressable issues but are accompanied by a rejection recommendation. It may seem easier to save the “worst” for the letter to the editor, but it will leave the authors trying to guess why exactly the paper was rejected. Anecdotally, the most likely conclusion they will come to is “The reviewer just didn’t like the paper and then looked for reasons to reject it”, which is how Reviewer 2 groups get formed. Of course, you should use professional and courteous language in your reports. But don’t hide your ultimate opinion about the paper from the authors.
  4. In case of a revise-and-resubmit, make it clear to the authors what the must-dos and nice-to-dos are. Point 1 does not mean you should avoid suggestions that wouldn’t make or break publication. Many of my papers were improved by suggestions that weren’t central to the revision (for example, a reviewer once suggested a great title change). So if you have a good idea for improving the paper, by all means share it with the authors. But keep in mind that they will have at least one, and often two or three, other reviewers to satisfy, and the “to do” list can quickly spiral out of control. Sometimes the editor will tell the authors which reviewer comments to address and which to ignore. But sometimes the editor will pass on the comments to the authors as is. By separating your comments into those you think are indispensable and those that are optional, you’ll be doing the authors a big favor.
  5. Don’t spend a lot of time on a paper that you’re sure you’re going to reject. This is perhaps the most controversial piece of advice (see this Tweet & subsequent discussions) because some authors view the review process as a “peer feedback” system. But it is not (see point 1). And, at least in economics, many of us are overwhelmed with review requests and editors sometimes have a hard time finding available reviewers. Treating the review process as “peer feedback” exacerbates this problem. If you think the authors’ basic premise is fundamentally flawed or the data are so problematic that no answer obtained from them would be credible, you should not feel obligated to give comments on other parts of the paper. This does not mean that you should not be thorough – there are few things more frustrating than a reviewer complaining about something that was explicitly addressed by the authors. But in such cases you do not need to give feedback on parts of the paper that did not affect your decision.

Finally, I’d like to wrap up with an outline of how I actually do a review. First, I print out a physical copy of the paper and read it, highlighting/underlining and making notes in the margins or on a piece of paper. Second, I write a summary of the paper in my own words (it is useful for the editor to get an objective summary, and the authors can make sure I was on the same page as them). Third, I go through my handwritten comments and type up the most relevant ones, elaborating as needed. Fourth, I number my comments (helpful for referencing them in later stages, if applicable), order them from most to least important, and separate the deal-breakers or must-dos from the nice-to-dos. Fifth, I highlight the deal-breakers (if rejecting) or must-dos (if suggesting revisions) in the letter to the editor. Finally, regardless of my recommendation, I try to say something nice about the paper both in the editor letter and in the report. Regardless of quality, most papers have something good about them, and authors might be just a tad happier if their hard work were acknowledged more often.

Should you get a PhD?

When I asked my undergraduate advisor for a recommendation letter to PhD programs, he replied, with genuine surprise, “Why do you want to get a PhD?” I was too stunned by his question to ask him to elaborate. In my mind, why wouldn’t you get a PhD? You get to learn more about what you’re interested in, you usually get enough money to support yourself, and aren’t more educated people more employable in general?  

Since then, I have come to appreciate my advisor’s reluctance to unequivocally endorse PhDs. (For the record, I don’t think his reaction had anything to do with his opinion of me – he did write me a letter. Also for the record, I do not regret getting a PhD!). “A PhD is an expensive degree” is something I cannot say often enough. Yes, usually you do not pay for the degree directly, but the earnings and quality of life you give up for 4-6+ years are similarly important costs to consider. With that, here are important questions to ask yourself before enrolling in a PhD program.

1) Do you like doing research in the field you are considering? Most PhD programs are geared toward training researchers. And academic research is very different from research you may have done in a class. Class research projects are doable and have an easily identifiable end (aka the due date). Academic research can be incredibly fulfilling because you get to be at the frontier of knowledge. But it is also unpredictable, uncertain, and (usually) involves unforeseen and frustrating setbacks. In the course of a research project, you may discover something incredible or you may end up discarding months of hard work and starting over.

An important emphasis here is on “doing research”. I love reading about new discoveries in genetics and wanted to be a genetics researcher when I was in high school. But then I learned more about how research in genetics is conducted and realized that being a consumer of research and being a producer of research are two very different things. That example also highlights that not enjoying research in one discipline does not mean there isn’t another discipline out there for you.

The best way to figure out if you like academic research is to work as a research/lab assistant for a professor or scientist. You will probably end up doing bottom-of-the-barrel work, but you will observe and experience how the process works, which will give you a pretty good idea of whether research is for you.

How do you find research opportunities? Your undergraduate institution may have formal programs. But it’s also perfectly fine to email professors directly and ask if they have research opportunities. You may not get a ton of responses, but you only need one. Looking for a full-time research assistant position is another good option. Finally, you can try research yourself by writing a senior thesis or independent study under the supervision of a professor.

2) How much will your career depend on successfully getting grants? I was happily oblivious to the fact that many academics’ careers live and die by whether they successfully raise money to support their research and their students. Luckily for me, in economics fundraising is optional. But in many other disciplines, applying for grants is an incredibly important part of a researcher’s career. Constantly trying to get new grants to keep your research agenda and students funded can be very stressful. Knowing what kind of funding pressures you might face is important information to incorporate into your decision.

3) What would you do if you didn’t get a PhD and where would that get you in 4-6 years? I always ask students who want to apply to a PhD program why they want a PhD (not with the surprised tone my undergrad advisor used though). About half the time, the answer makes it clear that the biggest reason is that they aren’t sure what to do next, so getting more education seems like a safe fallback option. If that describes you, spend some time researching other career options. And don’t just consider what entry-level jobs you could get instead of a PhD. Remember, a PhD is a big time commitment, so you should be comparing getting a PhD to spending 4-6 years working. Often, those years of work experience can get you far, both financially and in terms of doing interesting work.

4) How difficult is it to get a faculty/researcher position after the PhD program? If you have tried research and loved it and are satisfied with the grant funding situation in your chosen field, there’s still the harsh reality that, in many disciplines, only a small fraction of PhDs end up getting academic/researcher positions. Quite a few end up working in positions that are only tangentially related (data science and finance are popular destinations for math and physics PhDs).

The answer to this question obviously varies by institution, and no program can guarantee a research-based job afterward. If your decision to get a PhD hinges on the answer to this question, I suggest applying, seeing where you get in, and then asking those programs about their placement records. If you don’t get into a program with a placement record that satisfies you, working for a year or two, beefing up your credentials, and trying again may be a good option.

I don’t mean to sound too negative about PhD programs. For many, including myself, the intellectual satisfaction of research and the ability to set your own course more than offsets the costs of a PhD program. But I think the world would be a better place if most prospective PhDs knew what exactly they were getting themselves into!

Political Science Journal Rankings

How do we judge how good a journal is? Ideally by the quality of the articles it publishes. But the best systematic tools for quantifying quality we’ve come up with so far are citation-based rankings. And these are far from perfect, as a simple Google search will reveal (here’s one such article).

I’ve been using Academic Sequitur data to experiment with an alternative way of ranking journals. The basic idea is to calculate what percent of authors who published in journal X have also published in a top journal for that discipline (journals can also be ranked relative to every other journal, but the result is more difficult to understand). As you might imagine, this ranking is also not perfect, but it has yielded very reasonable results in economics (see here).

Now it’s time to try this ranking out in a field outside my own: Political Science. As a reference point, I took 3 top political science journals: American Political Science Review (APSR), American Journal of Political Science (AJPS), and Journal of Politics (JOP). I then calculated what percent of authors who published in each of 20 other journals since 2018 have also published a top-3 article at any point since 2000.

Here are the top 10 journals, according to this ranking (the above-mentioned stat is in the first column).


Quarterly Journal of Political Science and International Organization come out as the top 2. This is noteworthy because alternative lists of top political science journals that were suggested to me included these two journals! Political Analysis is a close third, followed by a group of 5 journals with very similar percentages (suggesting similar quality).

Below is the next set of ten. Since this is not my research area, I’m hoping you can tell me in the comments whether these rankings are reasonable or not! Happy publishing.

Finally, here’s an Excel version of the full table, in case you want to re-sort by another column. Note that if a journal is not listed, that means I did not rank it. Feel free to ask about other journals in the comments.

Machine learning in economics

Machine learning seems to be everywhere in economics these days. I wondered – has this been a gradual trend or a sudden explosion? So I again turned to Academic Sequitur data. This time, I decided to stick to NBER working papers as my data source, largely because they lead journal publications by a few years. I looked for the following terms in the abstract or title: “machine learning”, “lasso”, “neural net”, “deep learning”, and “random forest”. The graph below shows the percent and number of NBER working papers that meet these criteria over time (on the left and right y-axis, respectively).

An explosion indeed! Virtually no abstracts or titles mention anything machine-learning related in 2000-2014. Then we have a respectable five papers in 2015, one paper in 2016, followed by 15 papers in 2017, 22 papers in 2018, and five papers so far this year. As a share of total output, however, machine learning papers remain a small minority, making up at most 1.5% of papers in any year. Whether the numbers stagnate or keep skyrocketing remains to be seen!
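For the curious, the term filter behind these counts can be sketched in a few lines of Python. This is a minimal illustration, not the actual Academic Sequitur pipeline, and the record fields (“year”, “title”, “abstract”) are hypothetical:

```python
import re

# The five search terms from the post, matched as whole phrases, case-insensitively.
ML_TERMS = ["machine learning", "lasso", "neural net", "deep learning", "random forest"]
ML_RE = re.compile(r"\b(?:" + "|".join(re.escape(t) for t in ML_TERMS) + r")\b",
                   re.IGNORECASE)

def ml_share_by_year(papers):
    """papers: iterable of dicts like {"year": 2015, "title": ..., "abstract": ...}.
    Returns {year: (n_matching, n_total, percent_matching)}."""
    counts = {}
    for p in papers:
        n_ml, n_total = counts.get(p["year"], (0, 0))
        # A paper counts if any term appears in its title or abstract.
        hit = bool(ML_RE.search(p["title"] + " " + p["abstract"]))
        counts[p["year"]] = (n_ml + hit, n_total + 1)
    return {y: (m, n, 100 * m / n) for y, (m, n) in counts.items()}
```

The word boundaries matter: without them, “lasso” could also match inside longer, unrelated words.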

And in case you’re wondering, the prize for the first NBER working paper to utilize machine learning goes to…”Demand Estimation with Machine Learning and Model Combination” by Patrick Bajari, Denis Nekipelov, Stephen Ryan, and Miaoyu Yang, issued in February of 2015.

Update: here’s how the graph would look if you also counted “big data” as indicating machine learning. Prize for first NBER paper to mention “big data” goes to “The Data Revolution and Economic Analysis” by Liran Einav and Jonathan Levin, issued in May 2013.

Forthcoming (if this post is popular): published papers utilizing machine learning!

A new way of ranking journals 2.0 – journal connectedness

A few weeks ago, I proposed that one could rank journals based on what percent of a journal’s authors have also published in a top journal. I calculated this statistic for economics and for finance, using the top 5/top 3 journals as a reference point.

Of course, one does not have to give top journals such an out-sized influence. One beauty of this statistic is that it can be calculated for any pair of journals. That is, we can ask, what percent of authors that publish in journal X have also published in journal Y? This “journal connectedness” measure can also be used to infer quality. If you think journal X is good and you want to know whether Y or Z is better, you can see which of these two journals has a higher percentage of authors from X publishing there. Of course, with the additional flexibility of this ranking come more caveats. First, this metric is most relevant for comparing journals from the same field or general-interest journals. If X and Y are development journals and Z is a theory journal, then this metric will not be very informative. Additionally, it’s helpful to be sure that both Y and Z are worse than X. Otherwise, a low percentage in Z may just reflect more competition.

With those caveats out of the way, I again used Academic Sequitur‘s database and calculated this connectedness measure for 52 economics journals, using all articles since 2010. Posting the full matrix as data would be overkill (here’s a csv if you’re interested though), so I made a heat map. The square colors reflect what percent of authors that published in journal X have also published in journal Y. I omitted observations where X=Y to maximize the relevance of the scale.
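As a sketch of the computation (assuming simplified article records with “journal” and “authors” fields; the real database schema is surely richer), the connectedness matrix boils down to author-set intersections:

```python
from collections import defaultdict

def connectedness(articles, journals):
    """For each ordered pair (X, Y), the percent of X's authors who have also
    published in Y. `articles` are dicts like {"journal": ..., "authors": [...]}."""
    authors = defaultdict(set)  # journal -> set of author names
    for art in articles:
        if art["journal"] in journals:
            authors[art["journal"]].update(art["authors"])
    matrix = {}
    for x in journals:
        for y in journals:
            if x != y and authors[x]:  # skip X=Y, as in the heat map
                matrix[(x, y)] = 100 * len(authors[x] & authors[y]) / len(authors[x])
    return matrix
```

A real version would also restrict articles to the 2010+ window and disambiguate author names, which is where most of the actual work lies.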

A few interesting patterns emerge. First, the overall percentages are generally low, mostly under 10 percent. The median value in the plot above is 3 percent and the average is 4.3 percent, but only 361 out of 2,652 squares are <0.5 percent. That means that a typical journal’s authors’ articles are dispersed across other journals rather than concentrated in some other journal. This makes sense if the typical journal is very disciplinary or if there are many equal-quality journals (eyeballing the raw matrix, it seems like a bit of both is going on, but I’ll let you explore that for yourself).

There are some notable exceptions. For example, 41% of those who have published in JAERE have published in JEEM, 54% of those who published in Theoretical Economics have published in JET, and 35% of those who have published in Quantitative Economics have published in the Journal of Econometrics. These relationships are highly asymmetric: only 13% of those who have published in JEEM have published in JAERE, only 16% of those who have published in JET have published in Theoretical Economics, and only 4% of those who have published in the Journal of Econometrics have published in Quantitative Economics.

This map also contains another important signal: horizontal lines with many green and light blue squares indicate journals that people across the board seem to be systematically attracted to. And then there’s that green cluster at the bottom left, with some yellows thrown in. Which journals are these?

I had the benefit of knowing what the data looked like before I made these heat maps, so I deliberately assigned ids 1-5 to the top 5 journals (the rest are in alphabetical order). So one pattern this exercise reveals is that authors from across the board are flocking to the top 5s (an alternative interpretation is that people with top 5s are dominating other journals’ publications). And people who publish in a top 5 tend to publish in other top 5s – that’s the bottom left corner. In fact, if you omitted the top 5s, as the next graph does, the picture would look a lot less colorful.

But even without the top 5, we see some prominent light blue/green horizontal lines, indicating “attractive” journals. The most line-like of these are: Journal of Public Economics, Journal of the European Economic Association, Review of Economics and Statistics, Economics Letters, and JEBO. Although JEBO was a bit surprising to me, overall it looks like this giant correlation matrix can be used to identify good general-interest journals. By contrast, the AEJs don’t show the same general attractiveness.

Finally, this matrix illustrates why Academic Sequitur is so useful. Most authors’ articles are published in more than just a few journals. Thus, to really follow someone’s work, one needs to either constantly check their webpage/Google Scholar profile, go to lots of conferences, or subscribe to many journals’ ToCs and filter them for relevant articles. Some of these strategies are perfectly feasible if one wants to follow just a few people. But most of us can think of way more people than that whose work we’re interested in. Personally, I follow 132 authors (here’s a list if you’re interested), and I’m sure I’ll be continuing to add to this list. Without an information aggregator, this would be a daunting task, but Academic Sequitur makes it easy. Self-promotion over!

If you think of anything else that can be gleaned from this matrix, please comment.

Journal Rankings: Extensions and Robustness Checks

My recent post on a new way of ranking journals using data from Academic Sequitur (which you should check out, by the way!) was more popular than I expected. People pointed out important theory and macro journals I had missed (I’m clearly an applied micro person). So I added more journals. They also pointed out that using the top 5 as the reference set may mean that the ranking reflects who is in “the club” of these journals more than anything else. One thing I will do in the future is make a giant matrix of pairwise journal relationships, so that if you don’t like using the top 5 as a reference, you can use a different journal. But for now, I calculated what percent of authors in each journal have only one top 5 publication. This could plausibly make the ranking noisier (maybe those people just got lucky), but it should reduce the influence of those who live in the top 5 club (as opposed to visiting as guests!).

Finally, someone pointed out that because AER and AEJs are linked, using publication in AER as a metric for the quality of AEJs may be misleading. So I calculated the percent publishing in top 4, excluding the AER. This metric is what the data below are sorted by.

So without any further ado, I give you the expanded and revised rankings! First, the “top 10”.


One thing worth pointing out here is that Quantitative Economics is linked to Econometrica, as is also evident from the high proportion of its authors who have published there. Theoretical Economics and Journal of Economic Theory were not originally in the set of journals I ranked, but they score high both with and without counting the AER. Overall, the rankings get re-shuffled a bit, but given how numerically close the original percentages were, I would call this broadly similar.

Next ten journals:

Next ten:

And here’s the final set:

How do the rankings with and without AER compare? Four journals rise by 5+ spots when AER is excluded: Quantitative Economics, Journal of Mathematical Economics, Review of Economic Dynamics, and Quantitative Marketing and Economics. And four journals fall by 5+ spots: AEJ: Micro, Journal of Human Resources, Journal of International Economics, and Journal of the Association of Environmental and Resource Economists (abbreviated as JAERE above). AEJ: Policy falls by four spots, AEJ: Macro falls by one spot, and AEJ: Applied holds its rank.

What if we only count authors who have just one top 5? That changes the rankings much more, actually, with 13 journals rising 5+ spots, including ReStat, JHR, JIE, JUE, and JPubEc. Nine journals fall by 5+ spots, including AEJ: Applied, JEEA, RAND, JEL, and IER. To me, that suggests that who we count matters much more for the ranking than which journals we count.

Bottom line is: stay tuned (you can subscribe to be notified when new posts appear on the bottom right). I plan to play around with these rankings a lot more in the next few months to figure out if/how they can be useful! If you want to play around with the data yourself, the full spreadsheet is here (let me know what you find).

Ranking finance journals

Last week, I tried out a new way of “ranking” economics journals, based on the percent of 2018-2019 authors who have also published in one of the top 5 economics journals anytime since 2000. This week, I decided to take a look at finance journals (political science is next in line, as well as some extensions and robustness checks for econ journals).

The top 3 finance journals are generally agreed to be Journal of Finance, Journal of Financial Economics, and Review of Financial Studies. How do other finance journals stack up against them according to this metric? For fun and fairness, I threw in the top 5 econ journals into the mix, as well as Management Science.

Here are the “top 10” journals according to this metric (not counting the reference top 3, of course). The first numerical column gives the percent of authors that published in the journal specified in the row in 2018-2019 who have also published an article in any of the top 3 finance journals at some point since 2000. The next three columns give journal-specific percentages.

Because this is not my field, I have less to say about the reasonableness of this ranking, but perhaps finance readers can comment on whether this lines up with their perception of quality. Compared to the econ rankings, the raw percentage differences between journals appear larger, at least at the very top. And the overall frequency of publishing in the top 3 is lower. Management Science makes the top 5, but the top econ journals do not (JPE and ReStud do make the top 10). To me, this makes sense, since it’s pretty clear that this ranking picks up connectedness as well as quality. Anecdotally, finance departments seem to value Management Science and the top 5 econ journals no more, and perhaps less, than the top 3 finance journals.

Here are the rest of the journals I ranked (as before, if a journal is not on the list, it doesn’t mean it’s ranked lower; it means I didn’t rank it). Here, we can clearly see that not many people who publish in JF, JFE, and RFS publish in AER, QJE, or Econometrica.

If there’s another journal you’d like to see ranked in reference to the top 3 finance ones, please comment!

How good will AER: Insights be?

American Economic Review: Insights is a new journal by the American Economic Association. It’s intended to replace the short paper section of the AER, and the first issue will appear this summer. Naturally, I’ve had quite a few discussions with colleagues about its likely quality: will AER: Insights be a top-tier journal like the AER, somewhere in the middle of the pack, or a flop?

Obviously, many factors affect the success of a journal. But how it starts out surely matters. Publish some amazing articles, and you become the journal at the top of people’s minds when they think about where to submit, which in turn makes it easier to attract high-quality articles. Publish some questionable research, and risk aversion kicks in, prompting people to submit elsewhere first and leaving you mostly with articles that other journals decided against publishing.

So I again dove into the database of Academic Sequitur. We track forthcoming articles, so even though the first issue of AER: Insights has not been published yet, we have 26 to-be-published articles in our database (the first of which were added in November of 2018, by the way!). The question I asked was simple: what percent of authors whose work is scheduled to appear in AER: Insights have published a top 5 article any time since 2000?

The answer is a whopping 67% (61% if you exclude AER articles). 58% have published in the AER, 23% have published in Econometrica, 38% have published in QJE, and 39% have published in ReStud. The overall percentage is non-trivially higher than that of any other journal except for Journal of Economic Literature.

Perhaps these numbers are not surprising to you. In fact, it may very well be a strategy that AER: Insights is consciously employing to gain early traction. And these statistics could signal that it’s difficult to get published there unless you’re well-known, at least at this point (though we don’t know what the distribution of submissions looks like). But more information is better, and this certainly makes me more likely to try for AER: Insights in the future!

A new way of ranking journals*

Journal rankings can be controversial. At the same time, the quality of a journal in which one’s research is published is generally thought to be very important for a researcher’s career, and many researchers are thus rightly concerned about it.

Here at Academic Sequitur, we came up with a new (to the best of our knowledge) way to think about how a journal is perceived by the profession. This blog post focuses on the field of economics. We start with the premise that the top 5 economics journals (AER, Econometrica, JPE, QJE, and ReStud) are, on average, really the best in the profession. Then we calculate what percent of authors who published in another journal have at least one “top 5” publication. The higher that number is, the more likely the journal is to be high quality. For non-top-5 journals, we considered all articles published since 2018 (results are similar if we start in 2009, when the AEJs were launched). For determining whether an author has a top-5 publication, we used top 5 articles since the year 2000.
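In Python, the metric boils down to a couple of set operations. This is an illustrative sketch with hypothetical record fields, not our production code; real data would also require careful author-name disambiguation:

```python
TOP5 = {"AER", "Econometrica", "JPE", "QJE", "ReStud"}

def top5_share(articles, journal, journal_since=2018, top5_since=2000):
    """Percent of `journal`'s authors (publishing since `journal_since`) with at
    least one top-5 article since `top5_since`. `articles` are dicts like
    {"journal": "JEEM", "year": 2019, "authors": ["A. Smith", "B. Lee"]}."""
    # Everyone with a top-5 publication in the comparison window.
    top5_authors = {a for art in articles
                    if art["journal"] in TOP5 and art["year"] >= top5_since
                    for a in art["authors"]}
    # Everyone who published in the journal being ranked.
    journal_authors = {a for art in articles
                       if art["journal"] == journal and art["year"] >= journal_since
                       for a in art["authors"]}
    if not journal_authors:
        return 0.0
    return 100 * len(journal_authors & top5_authors) / len(journal_authors)
```

The journal-specific columns in the tables below are the same calculation with a single journal substituted for the TOP5 set.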

Before I show you the results, it’s important to note that one thing which can affect this ranking is the topic of the journal under consideration. If a journal’s focus is not “sexy” enough for a top 5 journal, that’s likely going to lower its ranking. Whether this is a feature or a bug I will let you decide.

So with that, here are journals that come out on top, based on the overall proportion of authors publishing at least one article in any top 5 (the rest of the columns show the journal-specific proportions).

Four interesting things about these rankings: First, the shares are really high for eight out of ten of these journals, with about half the authors having at least one top 5. To me, this pattern suggests that we definitely shouldn’t overlook non-top-5 journals when looking for quality articles. Second, these eight journals are pretty close to each other according to this metric, suggesting that quality differences between them are not large. Third, this metric largely aligns with what I think are the general perceptions of applied microeconomists in North America, with one exception: we seem to be giving the Journal of the European Economic Association less credit than it deserves. Fourth, the rankings would definitely change if we used specific journals for comparison rather than the overall top-5 metric.

Here’s the next set of journals. Keep in mind that we didn’t perform these calculations for all journals in our database (this is just a blog post, after all). So if you don’t see your favorite journal, that doesn’t mean it’s ranked lower than these. It just means we didn’t calculate a ranking for it. But if you’d like, leave us a comment and we’ll tell you how your favorite journal ranks!

While this metric is unlikely to be perfect, it is also unlikely to be worse than citation impact measures. And its benefit for economics specifically is that it isn’t as affected by publication lags as a citation-based measure. What do you think?

How to be a productive researcher

We are taught a lot of research skills in grad school. But a lot of these are specific technical skills. Little attention is devoted to the question of how to be a productive researcher. This “meta-skill” is usually learned the hard way through trial and error or, in the best-case scenario, through others’ advice. Here are my two cents on what works.

  1. Treat the research process as a skill you have to learn and maintain. No one is born knowing how to take a project from an idea to a published paper; some people just figure it out more quickly than others. And the more you practice, the easier it gets. Having the right attitude about this process can help you calibrate expectations and muster willingness to persevere.
  2. Protect your research time. Figure out when you work best (e.g., mornings or evenings) and minimize other commitments during those times. In my calendar, 8am-11am is marked as “research time” every single weekday. That reminds me not to schedule other things during that time. To avoid having to respond to requests with “I’m sorry, that time doesn’t work for me, I’ll be sitting in my office doing research,” I often take the first step and suggest an afternoon meeting time. This doesn’t always work – for example, I taught 9:30-11am Tue/Thu last semester, and some morning meetings are unavoidable – but it greatly improves my productivity overall. Remember: a block of research time is a “real” commitment even though it doesn’t involve another person. In fact, your job (mostly) depends on it.
  3. Invest in your writing skills. Writing used to be difficult, and I would dread it. Nevertheless, I persevered and now writing is much easier and more enjoyable. Here are some specific suggestions.
    • Make an effort to write every day, during the time when your brain and body are at their best. For me, this is the morning.
    • Allow yourself to write “s&*^ty first drafts.” Do not try to spit out the perfect word/sentence/paragraph on the first try. Write freely, edit later.
    • Do not start out trying to write for hours at a time. If you are not used to writing regularly, aim for 30 minutes or even just 10 minutes. If you write for 10 minutes a day, that is almost an hour of writing per week. If you do 30 minutes a day, that adds up to 2.5 hours! The Pomodoro technique can be very helpful here.
    • Join a writing group. For about two years, I did Academic Writing Club, an online group where professors and grad students from related disciplines, guided by a “coach”, set weekly goals for themselves and check in daily with their progress. It is not free, but in my opinion worth it (and you can probably use your research budget). If you are looking for a free alternative, look for people around your university who are willing to get together and write!
  4. Prioritize projects based on how close they are to publication. (Obviously your coauthors’ preferences and constraints matter here, so this is a general guideline). Specifically, this should be the order of your priorities, if not on a daily, then at least on a weekly level:
    • Proofs of accepted papers that need to be turned around to the publisher. [When I first heard this suggestion, my reaction was, “I don’t have any proofs!” If that is the case, don’t worry, you will get there.]
    • Revise-and-resubmits.
    • Working papers that are closest to submission, whether these are brand new ones or rejected ones looking for a new home.
    • Projects that are closest to becoming working papers (e.g., ones where the analysis is complete).
    • Projects where you are analyzing the data (working with a model, if you’re a theorist).
    • Projects that are newer than everything above.

  5. Try to avoid being the bottleneck. If someone is waiting for you to do something on a project before she or he can work on it, try to prioritize that task. Obviously, one reason is that your coauthors may be annoyed if you take too long to do something you promised. But another (possibly more important) reason is that staying off the critical path boosts your overall productivity: your coauthors (or research assistants) can move the project forward while you work on something else.