Problems with academic publishing

Almost a year ago, I wrote about why I thought Plan S wasn’t the right solution for moving towards more open access to scientific publications. I generally stand by the points made in that post, but since then there have been some changes in Plan S, along with a generally broader application of certain rules in its spirit. As I said then, that is definitely a good thing, but I still have some reservations. I don’t want any reader to think, however, that I am somehow defending the current state of affairs in academic publishing. In fact, as someone who currently participates both as an author and as a reviewer, I would say it is in quite a bad state. Here I outline some of the problems.

(I am writing this as a physicist/engineer working primarily on optics, so some nuances may not apply to other fields. But based on anecdotes from others, many of the problems below are general.)

Problems for readers:

Many academic papers reporting work funded by public money are not available for the general public to read.

My Plan S article discussed at length why this is often bad, though not always as bad as people make it out to be. Beyond Open Access (articles being freely available without a subscription) not being the default, it reflects a larger problem of academic publishing becoming corporate rather than society-focused. More people are starting to use social media and blogs to disseminate their work more broadly, for free, and for a lay audience, and I think that’s great.

It is impossible to keep up with the increasing volume of academic and scientific articles.

This reverberates into other realms, but for scientists and readers of academic papers in particular, it is simply impossible to keep up. Gone are the days when a single volume would arrive in the mail and you could skim all of the papers that interested you (those days were probably already gone by the 1960s). The growth reflects a mostly good trend – we are advancing knowledge at a faster rate – but it is also a product of unnecessarily frequent publication, driven by the incentives on authors and privately-owned publishers to publish more and more.

Problems for authors:

There is increasing pressure from institutions, whether for hiring or promotion, to publish more and more papers.

Colleges and universities increasingly require or expect researchers to publish more frequently. This takes power away from the researcher to decide what is relevant to publish. It may also incentivize certain bad practices – salami-slicing work to get more publications, adding authors who didn’t contribute (“prestige authors”), or, at the most extreme, paying for authorship.

There is increasing pressure from institutions, whether for hiring or promotion, to track metrics that are treated as proxies for an author’s success in publishing.

The first and most well-known metric for an author is the h-index: the largest number h such that the author has published h papers with at least h citations each. It is supposed to encapsulate both the quantity and the quality of an author’s publications and therefore be a good metric by itself. Along with the h-index, the total number of citations is often used for assessment. There is nothing wrong with either of these numbers in themselves, and I agree that they can probably tell you something about the quality of an author (compared to others in a similar field). The problem is with metrics like these being used for hiring and promotions. When an institution blindly uses such numbers, it may miss the bigger picture – the amount and quality of teaching and supervision, service to the greater academic community, outreach and communication, diversity and inclusion, and so on. It may also incentivize the same bad practices as above, in addition to practices like unnecessary self-citation.
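To make the definition concrete, here is a minimal sketch of the computation in Python (the citation counts are made up for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    # Sort citation counts from highest to lowest.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        # h grows as long as the paper at position `rank` still has
        # at least `rank` citations.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author with five papers: three of them have at least
# 3 citations, but only two have at least 4, so the h-index is 3.
print(h_index([10, 5, 3, 2, 1]))  # -> 3
```

Note that the same h can hide very different citation profiles – one blockbuster paper and a long tail look the same as a uniform record – which is part of why it shouldn’t be used blindly.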

But beyond these metrics, some institutions go even further and push for publications in high impact-factor journals. The impact factor (IF) measures how often the average paper in a given journal is cited, and a publication in a high-IF journal is considered fancier, more important, and more visible. The first problem is that using this to measure an author’s success is just completely wrong – the IF measures the standing of a journal and is useful for a publisher to brag about their journal, but it is not a measure of the quality of a given paper! Secondly, it also creates perverse incentives to hype up results and write papers in a flashy way to get into better journals, which may actually make the papers less readable.
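For reference, the widely used two-year IF is just a ratio: citations received in a given year to what the journal published in the previous two years, divided by the number of citable items it published in those two years. A sketch with hypothetical numbers:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    # Two-year impact factor for year Y: citations received in year Y
    # to items the journal published in years Y-1 and Y-2, divided by
    # the number of citable items it published in those two years.
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 600 citations in one year to the 200 papers it
# published over the previous two years gives an IF of 3.0.
print(impact_factor(600, 200))  # -> 3.0
```

Nothing in this ratio says anything about an individual paper – citation distributions within a journal are typically highly skewed, with a few papers collecting most of the citations.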

Publishing is easier for already successful authors.

The system is not perfect, and editors and reviewers will never be able to assess a paper on its scientific merit alone. There will always be biases, and given the high volume of papers and the fact that reviewers and many editors work for free, time and motivation are limited. Therefore, if an editor gets a paper written by a famous author, it is more likely to be sent on to reviewers, and reviewers are more likely to forgive small mistakes, give the benefit of the doubt, be afraid of challenging things too much, and probably also spend less time on their review. This means it is much easier to publish if you are an established author (beyond the fact that established authors probably also have more resources), and that the barrier to entry is high. The barrier is even higher for minorities, women, researchers from lesser-known institutions or from countries with less-established research in a given field, or any researcher with a smaller support system.

Problems for reviewers:

It is a thankless job to review papers.

Following up on the previous points, the reviewers who assess whether a paper is suitable for publication in a certain journal have a thankless job. It is almost never paid, often requires navigating websites for different journals that all have different procedures and formats, and is usually time-sensitive and anonymous. It is made even worse by the fact that this work, done for free, is done within an increasingly corporate publishing world. The only saving grace, and the reason I still routinely review papers, is that I need reviewers for my own papers and I think I do a good job. If I want 3 reviewers for each of my papers, then I should review at least three times as many papers as I submit – oof (see the sketch below). Improvements are on the way or already implemented at many publishers, but I don’t expect reviewing to become paid work any time soon, so the reality is that we should keep doing it for the sake of the system.
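That arithmetic as a back-of-the-envelope sketch (the submission rate is hypothetical):

```python
def reviews_owed(papers_submitted, reviewers_per_paper=3):
    # If every submission needs `reviewers_per_paper` reviews, a fair
    # share is to give back at least that many reviews per paper you
    # submit yourself.
    return papers_submitted * reviewers_per_paper

# Submitting, say, 4 papers a year implies owing roughly 12 reviews.
print(reviews_owed(4))  # -> 12
```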

Problems for journal editors:

It is hard to ensure the proper functioning of the peer-review process.

Lastly, even editors have a hard time. So many papers come across their desks that it is hard to truly assess them, and it is harder and harder to find enough reviewers (not to mention good reviewers). Only at the highest levels of editorship at private publishing houses are these positions paid, and even there the same problems apply. I’m not asking for sympathy for these people in power, but we do rely on them to keep the peer-review process working, so that we know that what we’re reading is valid and we know where to look for high-quality work.

This certainly isn’t an exhaustive list, so please comment below if you have something to add.
