
F1000Research admits their "objective authorship criteria" "disadvantage young researchers"

F1000Research, an open access publisher operating an innovative model of post-publication peer review, was yesterday embroiled in controversy as it emerged that their criteria for accepting submissions are based partly on the status of the author or their research institution, rather than simply on the quality of the science itself.
Chealsye Bowley, OA advocate and scholarly communications librarian, revealed on Twitter that she had had a paper rejected by F1000Research. Apparently there had been no assessment by either an editor or a reviewer; rather, Chealsye had failed F1000's "objective authorship criteria" – namely that authors must have either a PhD or MD and be formally affiliated with a recognised research institution. As Chealsye has no doctorate (although she does have two masters degrees) and does not work in a research department, but rather as a scholarly communications librarian, she failed this ad hominem test and was not judged to have sufficient expertise to enter the peer-review process.

F1000’s logic, as revealed in a series of tweets, is that: “We need line somewhere or literally anyone can publish. We try to make that line as objective as pos[sible]”:

https://twitter.com/F1000Research/status/755075782863716352

F1000 continued: “we have no editors & don't judge on paper quality, instead we have basic author criteria to provide an objective publishing framework (http://f1000research.com/about/policies#aaa …) all authors who meet criteria can publish with us. Researchers in scholarly publishing don't need a PhD if they have a clear publication record field” (Tweets 1, 2, 3)

[caption id="attachment_1127" align="alignleft" width="300"] CC BY 2.0 The Chicken & The Egg Dilemma by The Wanderer's Eye


The chicken/egg illogic of this policy is evidently crystal clear even to F1000 themselves, who had already conceded it in their decision email to Chealsye: "We appreciate ... that our criteria disadvantage young researchers who wish to publish on their own". Not to worry, though – they had an answer: publish with someone more senior. The problem was that although joint-authoring with supervisors is standard practice in STEM subjects, it is not in other areas. Chealsye's work was hers alone – it made no sense (and would have been borderline unethical) for her to attach her supervisor's name (and credentials) to the paper just to pass this arbitrary criterion.

I have to admit to being shocked and disappointed that this arbitrary and elitist policy belongs to F1000, normally such a progressive force in scholarly communications. For any progressive publisher to endorse a policy that they themselves admit “disadvantage[s] young researchers” is bizarre. For them to encourage a young author to unnecessarily bring in a co-author as a work-around to the illogic of their capricious policy is borderline disreputable.
Such a policy reinscribes old prejudices about exclusion/inclusion and the cult of the expert that (I'd thought) open science was meant to be addressing. I had hoped (though maybe I'm way off-base) that the idea of not being allowed to enter the scientific conversation without first brandishing your PhD like a licence-to-Sci was absurdly outdated, and it should be especially so in an applied area like scholarly communications. Simply having a PhD is like having a marathon medal: well done, it shows you were tenacious enough to keep going when things got rough over the long haul, but it really says nothing about how fast or how well you ran. Much less does it say whether you are ready to compete in a football match, a tennis match, or a game of tiddlywinks – a fair analogy when you consider that my PhD would in theory entitle me to submit papers across a range of fields about which I know nothing. PhD possession is not, in my experience, a reliable indicator of who is worth listening to in this field. Far better, then, to actually listen to them (for a while at least)!
[caption id="attachment_1124" align="alignright" width="299"]F1000 Image courtesy of F1000


F1000Research operates a model of post-publication peer review. When papers are submitted, they are subject only to what F1000 terms "initial objective checks" (an author's PhD possession evidently amongst them) before being immediately published. Only then are peer reviewers sought, who review in a transparent process of open identities and open reports. This innovative system has clear advantages in terms of transparency, but it is safe to say that this episode shows it still has a few bugs.
I have to admit to being puzzled as to why F1000 do not have editors give an initial assessment (as is usual in publishing, as well as at some pre-print servers like arXiv) to make sure it is only the work, rather than the worker, that is being judged. My guess is that this is for the same two reasons that drive all attempts to perform assessment via metrics: (1) money: a brute checklist of "objective" formal criteria is cheaper than having someone organise subjective assessments of the work itself; and (2) "objectivity": the ongoing quest to remove the "social" from science (it being seen as an affront that "objective" science should rest on subjective decisions by humans). I tend to think the former motivation is dominant (as with other moves to metricise academic assessment).
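To make concrete what such a brute checklist amounts to, here is a minimal illustrative sketch in Python. It is my own toy model, not anything F1000 actually runs, and the field names (has_phd_or_md, affiliation) are assumptions for illustration only:

# A toy model (mine, not F1000's actual code) of a purely formal author gate:
# the manuscript itself is never looked at, only the author's credentials.
from dataclasses import dataclass

@dataclass
class Author:
    has_phd_or_md: bool        # holds a doctorate or medical degree?
    affiliation: str = ""      # empty if no formal research-institution affiliation

def passes_formal_gate(author: Author) -> bool:
    """Return True if the author clears the 'objective authorship criteria'.

    Nothing here inspects the work; the worker is judged instead,
    which is exactly the problem described above.
    """
    return author.has_phd_or_md and bool(author.affiliation)

# A librarian with two masters degrees but no doctorate and no research
# department is rejected regardless of the quality of the paper:
print(passes_formal_gate(Author(has_phd_or_md=False)))  # False

The point of the sketch is simply that nothing in it ever touches the manuscript: the gate can be evaluated cheaply and "objectively" without anyone reading a single word of the work.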

The bigger question, however, is why F1000 feels the need to continue to act as a gatekeeper at all, as made evident in the tweet quoted above: "We need line somewhere or literally anyone can publish. We try to make that line as objective as pos[sible]".
Do we really need such a line? Why?
The costs of printing on paper and transporting it to readers used to mean that publication space was precious. With the Internet this is simply not so. So why not just open the floodgates and allow everyone who wants to put their manuscript onto the platform and invite peer review to do so? Perhaps this is again related to costs: although F1000Research charges APCs to cover its publication costs, I think I'm right in saying that they've also been pretty generous in offering waivers (in a bid to seed the system). It stands to reason that many of those applying for such waivers would be early career researchers, and hence F1000 might want some filter on those applications (again, this is only my guess and I'll be happy to be corrected/enlightened). Perhaps F1000Research is also conscious of the canard that OA journals will publish anything, and wants to make sure it is not flooded with low-quality submissions before it has had a chance to earn a name as a publisher of quality science among more traditional researchers.

Even if such motivations are in play, I still don't think they make sense. To my mind the elegance of post-publication peer review is to "publish first, then filter": everything gets a chance to enter the scientific conversation and is then assessed in public. If a paper is garbage, it will be flagged with negative review reports and hence "muted" from the conversation in future through advanced filtering and discovery systems. Moreover, there is reason to think that we don't need to worry so much about bad submissions: Ulrich Pöschl of Atmospheric Chemistry & Physics, an innovator in this area, has attributed that journal's high acceptance rates to authors submitting stronger first drafts because they know they will be assessed in public.
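As a toy illustration of "publish first, then filter" (again my own sketch under assumed data structures, not any real platform's code), a discovery layer could keep everything published while muting papers whose open reviews are predominantly negative:

# A toy "publish first, then filter" discovery layer (illustrative only):
# nothing is ever un-published; filtering happens downstream, at discovery time.
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    open_reviews: list = field(default_factory=list)  # e.g. "approved", "not approved"

def visible_in_discovery(papers):
    """Keep every published paper, but mute those with mostly negative open reviews."""
    def muted(p):
        negatives = sum(r == "not approved" for r in p.open_reviews)
        return p.open_reviews and negatives > len(p.open_reviews) / 2
    return [p for p in papers if not muted(p)]

corpus = [
    Paper("Solid study", ["approved", "approved"]),
    Paper("Weak study", ["not approved", "not approved", "approved"]),
]
print([p.title for p in visible_in_discovery(corpus)])  # ['Solid study']

The design choice this is meant to highlight is that the filtering happens downstream, in discovery, rather than upstream at the point of submission, so no author is turned away at the door.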

Happily, F1000 have already moved quickly away from their initial defensiveness to indicate that they will reassess this policy:
https://twitter.com/F1000Research/status/755100421564268544

F1000 are to be credited for this response. I hope they see sense and implement a better workflow to ensure that it is only the work and not the status of the person or institution that is being assessed. To do the latter is anathema to open science.
NB. This post reflects the author's personal opinion, not necessarily that of OpenAIRE.