Last week, I tried out a new way of “ranking” economics journals, based on the percent of 2018-2019 authors who have also published in one of the top 5 economics journals anytime since 2000. This week, I decided to take a look at finance journals (political science is next in line, as well as some extensions and robustness checks for econ journals).
The top 3 finance journals are generally agreed to be Journal of Finance, Journal of Financial Economics, and Review of Financial Studies. How do other finance journals stack up against them according to this metric? For fun and fairness, I threw the top 5 econ journals into the mix, as well as Management Science.
Here are the “top 10” journals according to this metric (not counting the reference top 3, of course). The first numerical column gives the percent of authors who published in the journal specified in that row in 2018-2019 and who have also published an article in any of the top 3 finance journals at some point since 2000. The next three columns give journal-specific percentages.
Because this is not my field, I have less to say about the reasonableness of this ranking, but perhaps finance readers can comment on whether this lines up with their perception of quality. Compared to the econ rankings, the raw percentage differences between journals appear larger, at least at the very top. And the overall frequency of publishing in the top 3 is lower. Management Science makes the top 5, but the top econ journals do not (JPE and ReStud do make the top 10). To me, this makes sense, since it’s pretty clear that this ranking picks up connectedness as well as quality. Anecdotally, finance departments seem to value Management Science and the top 5 econ journals no more than, and perhaps less than, the top 3 finance journals.
Here are the rest of the journals I ranked (as before, if a journal is not on the list, it doesn’t mean it’s ranked lower; it means I didn’t rank it). Here, we can clearly see that not many people who publish in JF, JFE, and RFS publish in AER, QJE, or Econometrica.
If there’s another journal you’d like to see ranked in reference to the top 3 finance ones, please comment!
American Economic Review: Insights is a new journal by the American Economic Association. It’s intended to replace the short paper section of the AER, and the first issue will appear this summer. Naturally, I’ve had quite a few discussions with colleagues about its likely quality: will AER: Insights be a top-tier journal like the AER, somewhere in the middle of the pack, or a flop?
Obviously, many factors affect the success of a journal. But how it starts out surely matters. Publish some amazing articles, and you become the journal at the top of people’s minds when they think about where to submit, which in turn makes it easier to attract high-quality articles. Publish some questionable research, and risk aversion will kick in, prompting people to submit elsewhere first and leaving you mostly with articles that other journals decided against publishing.
So I again dove into the database of Academic Sequitur. We track forthcoming articles, so even though the first issue of AER: Insights has not been published yet, we have 26 to-be-published articles in our database (the first of which were added in November of 2018, by the way!). The question I asked was simple: what percent of authors whose work is scheduled to appear in AER: Insights have published a top 5 article any time since 2000?
The answer is a whopping 67% (61% if you exclude AER articles). 58% have published in the AER, 23% have published in Econometrica, 38% have published in QJE, and 39% have published in ReStud. The overall percentage is non-trivially higher than that of any other journal except for Journal of Economic Literature.
Perhaps these numbers are not surprising to you. In fact, it may very well be a strategy that AER: Insights is consciously employing to gain early traction. And these statistics could signal that it’s difficult to get published there unless you’re well-known, at least at this point (though we don’t know what the distribution of submissions looks like). But more information is better, and this certainly makes me more likely to try for AER: Insights in the future!
Journal rankings can be controversial. At the same time, the quality of the journal in which one’s research is published is generally thought to be very important for a researcher’s career, and many researchers are thus rightly concerned about it. Here at Academic Sequitur, we came up with a new way (to the best of our knowledge) to think about how a journal is perceived by the profession. This blog post focuses on the field of economics. We start with the premise that the top 5 economics journals (AER, Econometrica, JPE, QJE, and ReStud) are really the best in the profession, on average. Then we calculate what percent of authors who published in another journal have at least one “top 5” publication. The higher that number is, the more likely the journal is to be high-quality. For non-top-5 journals, we considered all articles published since 2018 (results are similar if we start in 2009, when the AEJs were started). For determining whether an author has a top-5 publication, we used top 5 articles since the year 2000.
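To make the calculation concrete, here is a minimal sketch of the metric in Python, assuming a flat list of article records with journal, year, and author fields (the field names and data layout are illustrative, not our actual database schema):

```python
from collections import defaultdict

TOP5 = {"AER", "Econometrica", "JPE", "QJE", "ReStud"}

def rank_journals(articles, reference=TOP5, ref_start=2000, sample_start=2018):
    """articles: list of dicts with 'journal', 'year', and 'authors' keys (illustrative schema)."""
    # Authors with at least one publication in a reference journal since ref_start
    ref_authors = {a for art in articles
                   if art["journal"] in reference and art["year"] >= ref_start
                   for a in art["authors"]}

    # Authors who published in each non-reference journal during the sample window
    recent_authors = defaultdict(set)
    for art in articles:
        if art["journal"] not in reference and art["year"] >= sample_start:
            recent_authors[art["journal"]].update(art["authors"])

    # Percent of each journal's recent authors who also have a reference-journal publication
    return {j: 100 * len(authors & ref_authors) / len(authors)
            for j, authors in recent_authors.items() if authors}
```

Swapping in a different reference set (for example, the top 3 finance journals) gives the finance rankings discussed above.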
Before I show you the results, it’s important to note that one thing which can affect this ranking is the topic of the journal under consideration. If a journal’s focus is not “sexy” enough for a top 5 journal, that’s likely going to lower its ranking. Whether this is a feature or a bug I will let you decide.
So with that, here are journals that come out on top, based on the overall proportion of authors publishing at least one article in any top 5 (the rest of the columns show the journal-specific proportions).
Four interesting things about these rankings: First, the shares are really high for eight out of ten of these journals, with about half the authors having at least one top 5. To me, this pattern suggests that we definitely shouldn’t overlook the non-top-5-journals when looking for quality articles. Second, these eight journals are pretty close to each other according to this metric, suggesting that quality differences between them are not large. Third, this metric largely aligns with what I think are the general perceptions of applied microeconomists in North America, with one exception: we seem to be giving the Journal of the European Economic Association less credit than it deserves. Fourth, the rankings would definitely change if we used specific journals for comparison rather than the overall top-5 metric.
Here’s the next set of journals. Keep in mind that we didn’t perform these calculations for all journals in our database (this is just a blog post, after all). So if you don’t see your favorite journal, that doesn’t mean it’s ranked lower than these. It just means we didn’t calculate a ranking for it. But if you’d like, leave us a comment and we’ll tell you how your favorite journal ranks!
While this metric is unlikely to be perfect, it is also unlikely to be worse than citation impact measures. And its benefit for economics specifically is that it isn’t as affected by publication lags as a citation-based measure. What do you think?
We are taught a lot of research skills in grad school. But a lot of these are specific technical skills. Little attention is devoted to the question of how to be a productive researcher. This “meta-skill” is usually learned the hard way through trial and error or, in the best-case scenario, through others’ advice. Here are my two cents on what works.
1. Treat the research process as a skill you have to learn and maintain. No one is born knowing how to take a project from an idea to a published paper; some people just figure it out more quickly than others. And the more you practice, the easier it gets. Having the right attitude about this process can help you calibrate expectations and muster willingness to persevere.
2. Protect your research time. Figure out when you work best (e.g., mornings or evenings) and minimize other commitments during those times. In my calendar, 8am-11am is marked as “research time” every single weekday. That reminds me not to schedule other things during that time. To avoid having to respond to requests with “I’m sorry, that time doesn’t work for me, I’ll be sitting in my office doing research,” I often take the first step and suggest an afternoon meeting time. This doesn’t always work – for example, I taught 9:30-11am Tue/Thu last semester, and some morning meetings are unavoidable – but it greatly improves my productivity overall. Remember: the fact that your plan to do research at a particular time does not involve another person does not mean that it is not a “real” commitment. In fact, your job (mostly) depends on it.
3. Invest in your writing skills. Writing used to be difficult, and I would dread it. Nevertheless, I persevered and now writing is much easier and more enjoyable. Here are some specific suggestions.
Make an effort to write every day, during the time when your brain and body are at their best. For me, this is the morning.
Allow yourself to write “s&*^ty first drafts.” Do not try to spit out the perfect word/sentence/paragraph on the first try. Write freely, edit later.
Do not start out trying to write for hours at a time. If you are not used to writing regularly, aim for 30 minutes or even just 10 minutes. If you write for 10 minutes a day, that is almost an hour of writing per week. If you do 30 minutes a day, that adds up to 2.5 hours! The Pomodoro technique can be very helpful here.
Join a writing group. For about two years, I did Academic Writing Club, an online group in which professors or grad students from related disciplines, guided by a “coach”, create weekly goals for themselves and check in daily with their progress. It is not free, but in my opinion worth it (and you can probably use your research budget). If you are looking for a free writing group, look for people around your university who are willing to get together and write!
4. Prioritize projects based on how close they are to publication. (Obviously your coauthors’ preferences and constraints matter here, so this is a general guideline.) Specifically, this should be the order of your priorities, if not on a daily, then at least on a weekly level:
Proofs of accepted papers that need to be turned around to the publisher. [When I first heard this suggestion, my reaction was, “I don’t have any proofs!” If that is the case, don’t worry, you will get there.]
Revise-and-resubmits.
Working papers that are closest to submission, whether these are brand new ones or rejected ones looking for a new home.
Projects that are closest to becoming working papers (e.g., ones where the analysis is complete).
Projects where you are analyzing the data (working with a model, if you’re a theorist).
Projects that are newer than everything above.
5. Try to avoid being the bottleneck. If someone is waiting for you to do something on a project before she or he can work on it, try to prioritize that task. Obviously, one reason for this is that your coauthors may be annoyed if you take too long to do something you promised to do. But another (possibly more important) reason is that by not being the bottleneck, you boost your annual productivity: your coauthors (or research assistants) can get to their part of the work sooner.
Following the publication of the post on where to submit your paper, someone asked, “How do you know when it’s time to give up on a paper?”
This is a really hard question. We put a lot of work into our papers (in this post, I’m assuming the paper in question is completed) and, despite the theoretical wisdom of “ignore sunk costs”, it’s difficult to let go of months or years of hard work no matter how bleak things look. But there’s also no magic number of rejections beyond which it’s clear that you should just give up. Here are my two cents on how to make the decision.
First, here’s a clever trick I use to make “giving up” on a paper easier psychologically – I have never permanently given up on a paper. But I do have four papers and a lot of never-made-it-to-paper-stage projects “on the back burner”. I haven’t worked on them for years and don’t plan to unless I have nothing better to do. In other words, instead of asking the hard question “Should I never try to publish this paper again?”, ask the easier question “Should I prioritize other projects over this paper for now?” I always have the option to pull papers out of the “back burner” folder, but lo and behold, I keep having better projects to work on and don’t think much about the archived ones.
Of course, that still leaves the question of “Should I prioritize other projects over this paper?” open. I’ll discuss three related cases where this question becomes relevant and offer some general guidance for how to decide.
#1 Your paper has gotten rejected multiple (let’s say at least five) times for roughly the same reason, you don’t think you can do anything to address that shortcoming, and you have other, more promising, projects/ideas. If that reason is “this paper isn’t making enough of a contribution” AND you’ve revised your introduction substantially in between submissions to make the best possible case for your contribution, this may be a sign that it’s time to drop down a tier (though see some discussion below on when this is a good idea). At the same time, the contribution of a paper is hugely subjective. If the only thing reviewers find wrong with your paper is the contribution, then trying another journal within the same tier is fairly low-cost, assuming your contribution is actually within the realm of what gets published by the tier of journals you’ve been submitting to. Here, talking to senior colleagues is especially helpful.
If the reason your paper keeps getting rejected is something related to the paper’s data/methodology – for example, no one believes your instrument, no matter how many robustness or placebo tests you’ve added – then dropping down a tier is also an option, but is less likely to be a successful strategy. I came close to giving up on a paper because no one seemed to like the IV. I ultimately decided to keep trying though because (a) a lot of the rejections were desk rejections, allowing me to re-submit without revising (since there was no real feedback given) and (b) I believed in the instrument myself and thought we made a good case for it. After six rejections, the paper was published.
By contrast, if your paper is getting rejected for diverse reasons, it is probably good to keep trying (though in that case I would recommend taking a close look at the writing to make sure your exposition is clear).
#2 You feel that your paper would only be publishable if you dropped to a tier of journals where your current colleagues generally don’t publish, and you have other, more promising, projects/ideas. (Presumably, you think you need to drop down a tier because of numerous rejections. Otherwise, perhaps you are underestimating your paper!) For better or worse, publishing in a journal that your department really looks down on is sometimes viewed as a negative. So, if you otherwise have a good chance of getting tenure at your department (and want to get tenure at your department), you may want to put the project down and move on to something else. Two of my archived papers were archived for this reason.
#3 It looks like the path to publication in an acceptable-tier journal would be painful and you have other, more promising, projects/ideas. Maybe your case is not as extreme as the two cases above: you’ve had 3-4 rejections, you feel like you may have a shot at an acceptable but not stellar journal tier but, given the feedback you’ve gotten so far, you have a gut feeling that it would be painful for various reasons. Maybe a ref said the paper is not well-written and after taking a close look, you realize that the ref is right and that the whole paper needs an overhaul (I speak from experience). Maybe you have your own misgivings about the methodology/data and feel like an overhaul there is warranted. If you have other great projects in the pipeline with a lower cost-benefit ratio, by all means feel free to prioritize them. No one said you have to publish every paper you write.
Yes, I put “you have other, more promising, projects/ideas” in every entry on purpose. If you don’t have any other projects or ideas that have a reasonable shot at publication in the same tier as or higher than what you’ve been submitting to, then keep working on publishing the paper, even if it means a major overhaul. Use the suggestions I wrote about in a previous post on what to do after a rejection. While you wait for reviews, work on new projects and ideas, and if a better one comes along and your submission gets rejected, by all means abandon the old project.
A final word of caution is in order. According to my scientifically constructed chart below, our level of excitement about a project is always highest at the idea stage, when the promise seems unlimited and the pitfalls and barriers to getting there are not salient. So, if you find yourself constantly putting completed papers on the back burner and picking up new shiny ideas, stop! Go back to your best completed paper and publish it (and work on the shiny new ideas while you wait for reviews). Then repeat until you have a few publications.
Benjamin Franklin wrote, “in this world nothing can be said to be certain, except death and taxes” (though the earliest origin of that idea dates to Christopher Bullock in 1716, apparently). Most academics would agree that paper rejections also belong on that list. My 10 published papers have been rejected a total of 29 times. I also have two “archived” papers that were collectively rejected eight times before I gave up on them and four working papers that so far have been rejected seven times (two are now revise-and-resubmit, so the rate of rejection is decreasing). So I have a total of 44 rejections. I have ZERO papers that got a revise-and-resubmit at the first journal I submitted them to (= each of my papers has been rejected at least once). I’m not even counting conference and grant rejections.
Paper rejections come in many shapes and sizes: your run-of-the-mill “Nice paper, but not enough of a contribution for this journal” or “Too many little things wrong” rejections; a reviewer finding something genuinely wrong with your manuscript; boilerplate desk rejections; a half-page report from a lazy reviewer who clearly hasn’t read your paper; and the frustrating “I just don’t believe your results” rejection. Rejections don’t feel good, but given that they are inevitable, it’s important to learn how to deal with them and move past them as quickly as possible. Below, I provide some suggestions that have worked for me.
First, allow yourself to take a few days to “mourn” the decision. A few days of inaction after a rejection won’t make much of a difference. I typically don’t even read the referee reports closely until it’s been a few days because I’m not confident in my ability to take in the feedback objectively. By all means, trash-talk the referees to your colleagues (people at your institution almost surely won’t be asked to review your papers), join the “Reviewer 2 must be stopped” group on Facebook (especially if you don’t know what “Reviewer 2” refers to), have a drink or two (please drink responsibly), do some exercise, work on another paper, or binge-watch that show you’ve been waiting to see. Do be careful how you discuss your reports online or at conferences because you never know who your reviewers were or who might know who your reviewers were.
It is hard not to take rejections personally, but in the vast majority of cases, they are not personal. The reviewers rejected your paper; they did not reject you as a person or a researcher. Even the comments about your paper may not have anything to do with the quality of your paper. Some reviewers might strongly dislike a particular methodology or research area, others may have had a bad day or week, and some may operate in toxic environments where unnecessary harshness is disguised as “honesty”. Your reviewer may have been a graduate student doing a referee report for the first time or a senior professor drowning in service work. Almost everyone has a “Reviewer 2” story, including some of the best researchers, and you are not alone. If a reviewer seemed particularly unfair, talk to a senior colleague about appealing the decision. However, appeals are definitely not the standard way to deal with rejections.
Next comes the time for actual work. Unless the journal rejecting your paper was your last stop before you were going to abandon efforts to publish it, try to return to the reports within a week of the rejection and look at them objectively. It can be tempting to either (1) ignore the reports completely and send the paper back out as soon as possible or (2) treat the reports as a revise-and-resubmit and try to address all the reviewers’ comments. Neither approach is generally a good idea, for two reasons.
First, you may get the same reviewer again. In some fields, reviewing the same paper twice is not acceptable, so you may get a different draw in that case. But in economics and surely some other fields, it’s not uncommon for the same person to review the paper two or more times at different journals. In such a case, the best you can hope for if you didn’t change anything in your paper is that the reviewer will return the same report to the editor. But it’s also possible that the reviewer will be annoyed that you did not take into account any of the comments they worked hard to give you and treat your paper more harshly than the first time around. In short, you want to avoid giving the impression that you thought the comments so worthless that you did not address even one.
Second, even if you’re 100% sure you’re not going to get the same reviewer, it’s highly unlikely that the reviewers’ comments were completely idiosyncratic or idiotic. If you ignore a comment that you could have addressed and a subsequent reviewer has the same concern, your paper could end up rejected again for avoidable reasons. Despite all the “Reviewer 2” stories out there, I think the overall peer review process is far from completely broken, so it’s also very unlikely that all the comments are useless and wrong. In short, the best way to treat the reviewer reports following a rejection is as an opportunity to make your paper better.
When deciding whether to address a particular comment, I ask myself two things: (1) How likely is this comment to come up again? and (2) How easy is this for me to address? The higher the comment is on this two-dimensional likelihood-ease scale, the more you should jump at the chance to address it. Whether something is likely to come up again or not is the hardest question to answer. Here, thinking about comments you’ve gotten at conferences or asking colleagues for their feedback on a particular comment can be really helpful. Rigorous self-honesty helps too: with some introspection, most of us will be able to identify comments where the reviewer really does have a point. Once you’ve identified all the relevant comments, start addressing them one by one. Where to stop can be difficult to tell, but if you start with the comments that rank high on ease and/or likelihood, you can stop at any point with the knowledge that you’ve addressed the most important ones. For me, a good rule of thumb is that the paper should be ready to go back out within 1-3 months of not-full-time work (this is probably equivalent to about 1-3 weeks of full-time work for me). Anything more than that is likely to be excessive in most circumstances.
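If it helps to see the heuristic spelled out, here is a toy Python sketch of that two-dimensional prioritization; the scores and comment texts are made up for illustration:

```python
def prioritize_comments(comments):
    """comments: list of dicts with 'text', 'likelihood', and 'ease', each scored 1-5 (made-up scale)."""
    # Address comments that are both likely to recur and easy to fix first,
    # so you can stop at any point knowing the most important ones are done.
    return sorted(comments, key=lambda c: c["likelihood"] * c["ease"], reverse=True)

todo = prioritize_comments([
    {"text": "Clarify identification assumptions", "likelihood": 5, "ease": 4},
    {"text": "Re-run everything with a different dataset", "likelihood": 2, "ease": 1},
])
```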
I’ll wrap up with two specific suggestions. If a reviewer comment makes it seem like she or he misunderstood something about what you’re doing, try to see if you can make that part of the paper clearer. You have the privilege of knowing your paper better than anyone else, so what seems clear to you may not be to the average reader. If there is a comment that seems likely to come up again but would be really difficult to address, you have a few options. You can add a brief explanation as to why doing X would be difficult, possibly as a footnote, possibly as a suggested avenue for future research. This signals to reviewers that you are aware of X. Relatedly, you can hint that you could do X but it’s outside of the scope of the current paper. That allows a persistent reviewer to insist on seeing X in a revision but reduces the likelihood that they reject the paper because you didn’t already do X.
In the end, these steps don’t necessarily make rejections more pleasant, but they do move your paper closer to published!
There are many high-quality journals out there and choosing which one to submit your paper to can be a daunting task. Below, I offer some suggestions.
A great starting point is your manuscript’s reference section. Identify the papers most closely related to yours and tabulate the journals that published them. Think of a few more distantly related papers that may not have made it to your references and add the journals they are published in to your list. If your list is short or, by contrast, there are too many options, look at your advisors’ CVs for guidance. If you are still not happy, look at CVs of colleagues working in related fields.
Once you have identified at least five potential journals, go to each journal’s website and ask yourself: how often does this journal publish work similar to mine (in terms of subject, research methods, etc.) relative to other journals on my list? The less frequently a journal publishes papers in your research area, the lower your chances of acceptance. If the journal rarely publishes related papers, you may want to try somewhere else first.
Are there exceptions to this rule? Yes. If the journal has a new editor who works in your area, your chances are probably higher than historical publication information may indicate. Browsing the editorial board of a journal can thus help you assess your publication chances as well. The best sign is if your paper cites the work of at least one of the editors in a positive light. Not only does this mean they are more likely to handle your paper, but they are also likely to view it more favorably than someone outside the field. (Unless you’ve tried to disguise a bad paper as a good paper. But you wouldn’t do that, of course.)
Next, you want to ask yourself, how quickly does each journal process the average submission and how much time do you have? If you’re approaching a milestone like tenure or if there are many other people working on the same topic and you’re worried about being scooped, you may want to prioritize journals that turn papers around quickly. Sometimes journals publish these statistics (e.g., number of days to first decision). Other times, you may have to ask colleagues about their experiences with particular journals.
Another obvious consideration is the ranking/visibility of the journals on your list and your own goals for the paper. If you are trying to get tenure, prioritize the journals that are more valued by departments where you could plausibly get tenure (my suggestion is to not put all your eggs in one basket: ignore the idiosyncratic preferences of your current department unless you have a really, really good reason not to). If you’re trying to maximize the impact of your work, consider which journals are most respected by people in your field. Field respect and general journal rank are often correlated, but there can be some divergence.
Finally, think about how much rejection you want to take. On average, the more competitive the journal you have chosen, the longer it will take to publish the paper and the more likely you are to receive negative feedback. I’m personally of the opinion that the rejection process can be used productively and would thus recommend toughing it out if you have time, but this approach may not be right for everyone. For better or worse, rejections are inevitable for all of us, and learning how to deal with them is part of the academic career. More on that in another post!
In a fascinating series of graphs, David Card and Stefano DellaVigna document that submissions to the top 5 economics journals have gone up from about 3,000 per year in the early 1990s to over 7,000 per year in the mid-2010s, while acceptance rates have decreased from 10-15% to 2.4-5.6%. A natural interpretation of these trends is that it has gotten much harder to publish in top journals, all else equal. But could there be more to this story? There are (at least) three reasons things are not as clear as they seem at first glance.
First, submitting a paper to a journal has certainly become easier since the 1990s. It used to be (or so I’m told) that you had to physically print out the paper, put it into an envelope, send it somewhere and wait for a return envelope, which meant that even desk rejections took a lot longer than they do now, on average. The whole process should have been inherently harder, which meant people may have thought twice before trying for a Hail Mary at a top journal. With more and more people having internet access and virtually all top journals accepting electronic submissions, I would predict that a lot more people are willing to shop their papers around for a while in hopes of hitting the best journal possible.
Second, the emphasis on having a top-5 publication seems to have increased steadily over the past 20 years or so. Of course, there could be reverse causality here – if publishing a paper in the top 5 is harder, then success should be valued more, on average. But I think it’s likely that the emphasis on the top 5 journals has caused more people to try to publish papers there that they would otherwise send somewhere else, driving submissions up and acceptance rates down. For example, authors may now try for three of the five journals instead of just one (I may or may not have done this myself), so a single paper will show up as three submissions in the statistics above.
Third, the graphs show that the number of authors per paper has gone up too, so conditional on the total number of published papers being constant, the total number of people with a top 5 publication should be going up!
But shouldn’t the first two trends mean it’s gotten harder to publish in top journals, as the original statistics imply? This is true only if the marginal papers are about as good as the average papers. And that is very unlikely to be the case. Given the high rates of desk rejections at many journals, it seems more likely that there are many people trying for a very unlikely outcome. In fact, it’s possible that the probability of a good paper getting into a top journal is essentially unchanged because each marginal paper has an incredibly tiny probability of acceptance. Of course, it’s hard to believe that the publishing process is that accurate, so it’s likely to have gotten at least somewhat more competitive at the top journals. But I highly doubt that the level of competition is anywhere close to 2-5 times higher, as the statistics imply.
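As a rough back-of-the-envelope check on that last point, here is the arithmetic implied by the ranges cited above (a sketch based only on the published figures, not the underlying data):

```python
# Figures from the Card and DellaVigna graphs cited above
subs_1990s, subs_2010s = 3000, 7000
rates_1990s = (0.10, 0.15)    # acceptance rate range, early 1990s
rates_2010s = (0.024, 0.056)  # acceptance rate range, mid-2010s

# Naive "competition factor": how much acceptance rates have fallen
print(rates_1990s[0] / rates_2010s[0], rates_1990s[1] / rates_2010s[1])  # ~4.2x and ~2.7x

# Implied number of accepted papers per year has moved much less
print([subs_1990s * r for r in rates_1990s])  # ~300-450 per year then
print([subs_2010s * r for r in rates_2010s])  # ~168-392 per year now
```

The implied accepted-paper counts per year have changed far less than the acceptance rates themselves, which is one way to see why falling acceptance rates alone can overstate how much harder things have gotten for a given good paper.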
The real question we should be asking is, are there more high-quality papers per high-quality journal (somehow controlling for general progress in methods, computing power, and data)? Or, if we’re interested in the likelihood of a person having a top-5 publication, are there more high-quality academics per high-quality journal? Both of these questions are really difficult to answer, but I found no statistics suggesting that there have been significant increases in the number of academic economists over this time period. For example, this website from 2006 says that “the number of new PhDs who intend to pursue careers in the US has declined.” This report indicates that there were 24,886 AEA members and subscribers in 1999, and this one says there were 23,170 in 2017. My overall conclusion is that the decline in acceptance rates should not be interpreted as really bad news for good papers trying to get into good journals.
Generally, Academic Sequitur finds papers as soon as they are posted online. Increasingly, journals are posting papers as soon as they are accepted and correctly formatted (some even before then!), which means that when the “official” new issue is announced, the papers it consists of could have been hanging out on the web for months without being publicized. This month, I checked just how large this lag can be by seeing when Academic Sequitur found papers that were included in journals’ most recent issues.
I checked the top 5 economics journals: American Economic Review, Quarterly Journal of Economics, Journal of Political Economy, Econometrica, and Review of Economic Studies. For the January 2019 issue of AER, the included articles were all added to our database between July 10 and August 27, 2018 (corresponding to the dates they were posted). For the February 2019 issue of QJE, we found all the articles between August 20 and October 25, 2018. For the December 2018 issue of Econometrica, articles were found between June 4 and December 12, 2018. For the December 2018 issue of the Journal of Political Economy, we located all articles between August 2 and November 21, 2018. Finally, for Review of Economic Studies, articles from the January 2019 issue were located between January 28, 2018 (yes, almost a year early!) and November 28, 2018.
Our users find out about articles at the time they are posted, not when they get grouped into an issue after languishing online for months. And I think that’s a huge plus!
Mitch Kapor wrote, “Getting information off the Internet is like taking a drink from a fire hydrant.” In my experience, this couldn’t be more accurate when it comes to staying up to date on research.
I started as an Assistant Professor at the University of Illinois in 2011. Somewhere around 2015, I decided I should be more systematic about keeping up to date with recent literature in my field. Up until then, I relied on conferences, a few mailing lists, and colleagues forwarding me papers to learn about what was happening in environmental economics and in the profession as a whole. But I still felt like I was missing some important papers (indeed, I would periodically learn of a relevant paper that had been published several years earlier).
My initial solution to this problem was to spend a few hours signing up for various journals’ tables of contents. I figured I should track some of the top general-interest economics journals as well as the top journals in my field. So I signed up for: Quarterly Journal of Economics, American Economic Review, Journal of Political Economy, Review of Economic Studies, American Economic Journal: Applied Economics, American Economic Journal: Economic Policy, Journal of the European Economic Association, Journal of Environmental Economics and Management, Journal of the Association of Environmental and Resource Economists, Journal of Labor Economics, Journal of Risk and Uncertainty, NBER Weekly Paper Digest, and SSRN’s “environmental economics” list, which notified me about new pre-prints classified as environmental economics. I knew there might be some other papers I would miss, but I was confident that I would see most of them.
What happened next was just as disappointing as hearing about too few papers: I was overwhelmed by the number of irrelevant papers that came my way. The digests from various journals would arrive throughout the month and then sit there waiting for me to find the time to sort through them. Often, I would spend half an hour or more just skimming the titles and abstracts to find papers I was interested in. My brain hurt. Sometimes I would just delete the digest, overwhelmed by the task.
At some point in 2016, I decided to stop being a passive sufferer and do something about this problem, not just for myself but for others. Two years later, Academic Sequitur was born. I love our solution because it is very straightforward and transparent. Our users specify authors, journals, and/or keywords that they want to follow, and we then notify them of newly published relevant papers that meet their criteria in a daily or weekly digest.
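To illustrate just how simple and transparent that matching is, here is a minimal sketch of this kind of criteria-based filtering in Python; the field names and matching rules are hypothetical and only meant to convey the idea, not Academic Sequitur’s actual implementation:

```python
def matches(paper, prefs):
    """paper: dict with 'authors', 'journal', 'title', and 'abstract' (illustrative schema).
    prefs: dict with optional 'authors', 'journals', and 'keywords' sets a user follows."""
    if set(paper["authors"]) & prefs.get("authors", set()):
        return True
    if paper["journal"] in prefs.get("journals", set()):
        return True
    text = (paper["title"] + " " + paper["abstract"]).lower()
    return any(kw.lower() in text for kw in prefs.get("keywords", []))

def build_digest(new_papers, prefs):
    # Every paper matching any of the user's criteria goes into the daily or weekly digest
    return [p for p in new_papers if matches(p, prefs)]
```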
One awesome aspect of this solution is that one can vastly expand the set of journals from which personalized updates are pulled without worrying too much about information overload (you can of course still create information overload by selecting too many journals). We don’t do any fancy machine learning, and I think that’s a feature, not a bug: people can be sure that papers meeting their criteria will be delivered to their inbox and that they won’t miss anything they care about.
With the amount of information out there, you would think it would be easy to stay informed. But we are not supercomputers that can process terabytes of information and distill it down to what’s important on our own. Science cannot progress as quickly if the right information is not being delivered to and absorbed by the right people. Both researchers and society need better tools for disseminating research results, and I’m proud that Academic Sequitur is a part of that.