Defining Open Peer Review: Part Two - Seven Traits of OPR

ABSTRACT: This is part two of a series of posts describing OpenAIRE’s work to find a community-endorsed definition of “open peer review” (OPR), its features and implementations. As described in Part One, OpenAIRE collected 122 definitions of “open review” or “open peer review” from the scientific literature. Iterative analysis of these definitions resulted in the identification of seven distinct OPR traits at work, in various combinations, amongst them:
  • Open identities: Authors and reviewers are aware of each other's identity.
  • Open reports: Review reports are published alongside the relevant article.
  • Open participation: The wider community is able to contribute to the review process.
  • Open interaction: Direct reciprocal discussion between author(s) and reviewers, and/or between reviewers, is allowed and encouraged.
  • Open pre-review manuscripts: Manuscripts are made immediately available (e.g., via preprint servers like arXiv) in advance of any formal peer review procedures.
  • Open final-version commenting: Review or commenting on final “version of record” publications.
  • Open platforms: Review is de-coupled from publishing in that it is facilitated by a different organizational entity than the venue of publication.
In this post we will describe each of these OPR traits and their proposed advantages and disadvantages, with reference to evidence of their efficacy where available.

NOTE: The data for these definitions is available here. Readers are encouraged to review the data itself – perhaps there are definitions we've missed, or definitions you think have been coded wrongly? If so, please let us know by commenting directly in the spreadsheet or using the blog comments below!

Open Identities

Open identities peer review, also known as signed peer review (Ford, 2013; Nobarany and Booth, 2015) or “unblinded review” (Monsen and Horn, 2007), is review in which authors and reviewers are aware of each other's identities. Traditional peer review operates as either “single-blind”, where authors do not know reviewers’ identities, or “double-blind”, where both authors and reviewers remain anonymous. Double-blind reviewing is more common in the Arts, Humanities and Social Sciences than in STEM (science, technology, engineering and mathematics) subjects (Walker and Rocha da Silva, 2015), but in all areas single-blind review is by far the most common model (Elsevier, 2016). A main reason for maintaining author anonymity is that it is assumed to counter possible publication biases against authors with traditionally feminine names, from less prestigious institutions, or from non-English-speaking regions (Budden et al., 2008; Ross et al., 2006). Reviewer anonymity, meanwhile, is presumed to protect reviewers from undue influence, allowing them to give candid feedback without fear of reprisals from aggrieved authors. Various studies have failed to show that such measures increase review quality, however (Fisher et al., 1994; Godlee et al., 1998; Justice et al., 1998; McNutt et al., 1990; van Rooyen et al., 1999). As Godlee and her colleagues put it, “Neither blinding reviewers to the authors and origin of the paper nor requiring them to sign their reports had any effect on rate of detection of errors. Such measures are unlikely to improve the quality of peer review reports” (Godlee et al., 1998). Moreover, factors such as close disciplinary communities and Internet search capabilities mean that author anonymity is only partially effective, with reviewers shown to be able to identify authors in between 26 and 46 percent of cases (Fisher et al., 1994; Godlee et al., 1998).

Proponents of open identities peer review, on the other hand, argue that it enhances accountability, enables credit for peer reviewers, and simply makes the system fairer: “most importantly, it seems unjust that authors should be ‘judged’ by reviewers hiding behind anonymity” (van Rooyen et al., 1999). Open identities are also argued to potentially increase review quality: the theory is that reviewers will be more highly motivated and invest more care in their reviews if their names are attached to them. Opponents counter that signing will lead to poorer reviews, as reviewers temper their true opinions to avoid causing offence. To date, studies have failed to show any great effect in either direction (McNutt et al., 1990; van Rooyen et al., 2010, 1999). However, as these studies derive from only one disciplinary area (medicine), they cannot be said to be representative, and further research is undoubtedly required.

Open Reports

Open reports peer review is where review reports (either full reports or summaries) are published alongside the relevant article. The main benefits of this measure are that it makes currently invisible but potentially useful scholarly information available for re-use, that it increases transparency and accountability by opening normally behind-the-scenes processes of improvement and assessment to examination, and that it can further incentivize peer reviewers by making their review work a more visible part of their scholarly activities (thus enabling reputational credit).

Reviewing is hard work. The Research Information Network reported in 2008 that a single peer review takes an average of four hours, at an estimated total annual global cost of around £1.9 billion (Research Information Network, 2008). Once an article is published, however, these reviews usually serve no further purpose except residing in a publisher’s long-term archives. Yet they contain information that remains potentially relevant and useful. Works are often accepted despite the lingering reservations of reviewers. Published reports enable readers to consider these criticisms themselves and “have a chance to examine and appraise this process of ‘creative disagreement’ and form their own opinions” (Peters and Ceci, 1982). Making reviews public in this way also adds another layer of quality assurance, as the reviews are open to the scrutiny of the wider scientific community. Moreover, publishing reports aims to raise the recognition and reward of the work of peer reviewers: adding review activities to the reviewer’s professional record is common practice, and author identification systems such as ORCID now offer mechanisms to host such information (Hanson et al., 2016). Finally, open reports give early-career researchers a guide (to tone, length and the formulation of criticisms) to help them as they begin to peer review themselves.

The evidence base against which to judge such arguments is not yet sufficient to support strong conclusions, however. Van Rooyen and her colleagues found that open reports correlate with higher refusal rates amongst potential reviewers and an increase in the time taken to write a review, but with no concomitant effect on review quality (van Rooyen et al., 2010). Nicholson and Alperin’s small survey, however, found generally positive attitudes: “researchers … believe that open review would generally improve reviews, and that peer reviews should count for career advancement” (Nicholson and Alperin, 2016).

Open participation

Open participation peer review, also known as “crowdsourced peer review” (Ford, 2015, 2013), “community/public review” (Walker and Rocha da Silva, 2015) and “public peer review” (Bornmann et al., 2012), is review that allows the wider community to contribute to the review process. Whereas in traditional peer review editors identify and invite specific parties (peers) to review, open participation processes invite interested members of the scholarly community to participate, either by contributing full, structured reviews or shorter comments. Comments may be open to anybody (anonymous or registered), or some credentials might first be required (e.g., ScienceOpen requires an ORCID profile with at least five published articles (ScienceOpen, 2014)). Open participation is often used as a complement to a parallel process of solicited peer reviews. It aims to resolve possible conflicts associated with editorial selection of reviewers (e.g., biases, closed networks, elitism) and possibly to improve the reliability of peer review by increasing the number of reviewers (Bornmann et al., 2012). Reviewers can come from the wider research community, as well as from groups traditionally under-represented in scientific assessment, including representatives from industry or members of special-interest groups, for example patients in the case of medical journals (cf. Ware, 2011). This opens the pool of reviewers beyond those identified by editors to include all potentially interested parties (including those from outside academia), and hence could greatly increase the number of reviewers for each publication (though in practice this is unlikely). Evidence suggests this could increase the accuracy of peer review. For example, Herron (2012) produced a mathematical model of the peer review process which showed that “the accuracy of public reader-reviewers can surpass that of a small group of expert reviewers if the group of public reviewers is of sufficient size”, although only if the number of reader-reviewers exceeded 50.
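
To illustrate the intuition behind this claim, the sketch below uses a deliberately simplified majority-voting model; this is our own simplification for illustration, not Herron's actual model, and the panel sizes and per-reviewer accuracy values are assumptions chosen only to make the point visible.

```python
# A minimal sketch (our own simplification, not Herron's actual model):
# each reviewer independently reaches the correct accept/reject decision
# with probability p, and the group decides by strict majority vote.
# Panel sizes and accuracy values below are illustrative assumptions.
from math import comb

def majority_accuracy(n_reviewers: int, p_correct: float) -> float:
    """Probability that a strict majority of n independent reviewers,
    each correct with probability p_correct, reaches the right decision."""
    return sum(
        comb(n_reviewers, k) * p_correct**k * (1 - p_correct)**(n_reviewers - k)
        for k in range(n_reviewers // 2 + 1, n_reviewers + 1)
    )

# A small expert panel versus a larger crowd of less accurate reader-reviewers
print(f"3 experts (p = 0.85):           {majority_accuracy(3, 0.85):.3f}")   # ~0.94
print(f"51 reader-reviewers (p = 0.65): {majority_accuracy(51, 0.65):.3f}")  # ~0.99
```

Under these toy assumptions the larger but individually less reliable crowd outperforms the small expert panel, mirroring the threshold behaviour Herron describes; his model is, of course, considerably more elaborate.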

Criticisms of open participation routinely focus on questions of reviewers’ qualifications and their incentives to comment. As Stevan Harnad has said: “it is not clear whether the self-appointed commentators will be qualified specialists (or how that is to be ascertained). The expert population in any given speciality is a scarce resource, already overharvested by classical peer review, so one wonders who would have the time or inclination to add journeyman commentary services to this load on their own initiative” (Harnad, 2000). Moreover, difficulties in motivating self-selecting commentators to take part and deliver useful critique have been reported. Nature, for example, ran a trial from June to December 2006 in which submitting authors could choose to have their manuscripts opened for public comment in parallel with the usual solicited peer review. Nature judged the trial unsuccessful due to the small number of authors wishing to take part (just 5% of submitting authors), the small number of comments overall (almost half of the articles received none) and the insubstantial nature of most of the comments that were received (Fitzpatrick, 2011). At the Open Access journal Atmospheric Chemistry and Physics (ACP), which publishes pre-review discussion papers for community comments, only about one in five papers is commented upon (Pöschl, 2012). Bornmann et al. conducted a comparative content analysis of ACP’s community comments and formal referee reviews and concluded that the latter – tending to focus more on formal qualities, conclusions and potential impact – better supported the selection and improvement of manuscripts (Bornmann et al., 2012). All this suggests that although open participation might be a worthwhile complement to traditional, invited peer review, it is unlikely to be able to fully replace it.

Open Interaction

Open interaction peer review allows and encourages direct reciprocal discussion between author(s) and reviewers, and/or amongst reviewers themselves. In traditional peer review, reviewers and authors correspond only with editors: reviewers have no contact with other reviewers, and authors usually have no opportunity to directly question or respond to reviewers’ comments. Allowing interaction between authors and reviewers, or amongst reviewers, is hence another way to “open up” the review process, enabling editors and reviewers to work with authors to improve their manuscript. The motivation for doing so, according to Armstrong (1982), is to “improve communication. Referees and authors could discuss difficult issues to find ways to improve a paper, rather than dismissing it”.

Some journals enable pre-publication interaction between reviewers as standard (Hames, 2014; Walker and Rocha da Silva, 2015). The EMBO Journal, for example, enables “cross-peer review”, where referees are “invited to comment on each other's reports, before the editor makes a decision, ensuring a balanced review process” (EMBO Journal, 2016). At eLife, reviewers and editor engage in an “online consultation session” in which they come to a mutual decision, after which the editor compiles a single peer review summary letter giving the author a clear, non-contradictory roadmap for revisions (Schekman et al., 2013). The publisher Frontiers has gone a step further, including an interactive collaboration stage that “unites authors, reviewers and the Associate Editor – and if need be the Specialty Chief Editor – in a direct online dialogue, enabling quick iterations and facilitating consensus” (Frontiers, 2016).

Perhaps even more so than in the other areas studied here, evidence with which to judge the effectiveness of interactive review is scarce. Based on anecdotal evidence, Walker and Rocha da Silva (2015) advise that “[r]eports from participants are generally but not universally positive”. To the knowledge of the author, the only experimental study to have specifically examined interaction among reviewers or between reviewers and authors is that of Jeffrey Leek and his colleagues, who performed a laboratory study of open and closed peer review based on an online game and found that “improved cooperation does in fact lead to improved reviewing accuracy. These results suggest that in this era of increasing competition for publication and grants, cooperation is vital for accurate evaluation of scientific research” (Leek et al., 2011). Such results are encouraging, but hardly conclusive. There hence remains much scope for further research to determine the impact of interactivity upon the efficacy and cost of the review process.

Open pre-review manuscripts

Open pre-review manuscripts is review where manuscripts are made immediately openly accessible (via the Internet) in advance of, or in synchrony with, any formal peer review procedures. Subject-specific preprint servers like arXiv.org and bioRxiv.org, institutional repositories, catch-all repositories like Zenodo.org or Figshare.com, and some publisher-hosted repositories (like PeerJ Preprints) allow authors to short-cut the traditional publication process and make their manuscripts immediately available to everyone. This can complement a more traditional publication process, with comments invited on preprints and then incorporated into redrafting as the manuscript goes through traditional peer review with a journal. Alternatively, services which overlay peer-review functionalities on repositories can produce functional publication platforms at reduced cost (Boldt, 2011; Perakakis et al., 2010). The mathematics journal Discrete Analysis, for example, is an overlay journal whose primary content is hosted on the arXiv (Gowers, 2015). The recently released Open Peer Review Module for repositories, developed by Open Scholar in association with OpenAIRE, is an open-source software plug-in which adds overlay peer review functionalities to repositories using the DSpace software (Open Scholar, 2016). Another innovative model along these lines is that of ScienceOpen, which ingests article metadata from preprint servers and contextualizes it by adding altmetrics and other relational information, before offering authors peer review.

In other cases manuscripts are submitted to publishers in the usual way, but made immediately available online (usually following some rapid preliminary review or “sanity check”) before the start of the peer review process. This approach was pioneered with the 1997 launch of the online journal Electronic Transactions in Artificial Intelligence (ETAI), which used a two-stage review process: manuscripts were first made available online for interactive community discussion, before being subjected to standard anonymous peer review. The journal ceased publishing in 2002 (Sandewall, 2012). Atmospheric Chemistry and Physics uses a similar system of multi-stage peer review, with manuscripts made immediately available as “discussion papers” for community comments and peer review (Pöschl, 2012). Other prominent examples are F1000Research and the Semantic Web Journal.

The chief benefit of open pre-review manuscripts is that researchers can assert their priority in reporting findings – they need not wait for the sometimes seemingly endless peer review and publishing process, during which they live in constant fear of being scooped. Moreover, getting research out earlier increases its visibility, enables open participation in peer review (where commentary is open to all), and perhaps even, according to Pöschl (2012), increases the quality of initial manuscript submissions.

Open final-version commenting

Open final-version commenting is review or commenting on final "version of record" publications. If the purpose of peer review is to assist in the selection and improvement of manuscripts for publication, then it seems illogical to suggest that peer review can continue once the final version-of-record is made public. Nonetheless, in a literal sense, even the declared fixed version-of-record continues to undergo a process of improvement (occasionally) and selection (perpetually).

As with most areas of communication, the Internet has hugely expanded the channels through which readers can offer feedback on scholarly works. Where before only formal routes like letters to the journal or commentary articles offered readers a voice, now a multitude of channels exist. Journals are increasingly offering their own commentary sections: Walker and Rocha da Silva found that of 53 publishing venues reviewed, 24 provided facilities for user comments on published articles – although these were typically not heavily used (Walker and Rocha da Silva, 2015). Researchers seem to see the worth of such functionalities, with almost half of respondents to a 2009 survey believing that supplementing peer review with some form of post-publication commentary would be beneficial (Mulligan et al., 2013). But users can “publish” their thoughts anywhere on the Web – via academic social networks like Mendeley, ResearchGate and Academia.edu, via Twitter, or on their own blogs. The reputation of a work hence undergoes continuous evolution as long as it remains the subject of discussion.

Improvements based on feedback happen most obviously in the case of so-called ‘living’ publications, like the Living Reviews group of three disciplinary journals in the fields of relativity, solar physics and computational astrophysics, which publish invited review articles that authors regularly update to incorporate the latest developments in the field. But even where the published version is anticipated to be the final version, it remains open to future retraction or correction. These days such changes are often fueled by social media, as in the 2010 case of #arseniclife, where online critique of flaws in the methodology of a paper claiming to show a bacterium capable of growing on arsenic resulted in refutations being published in Science. The Retraction Watch blog is dedicated to publicizing such cases.

A major influence here has been the independent platform Pubpeer, which proclaims itself a “post-publication peer review platform”. When its users swarmed to critique a Nature paper on STAP (Stimulus-Triggered Acquisition of Pluripotency) cells, PubPeer argued that its “post-publication peer review easily outperformed even the most careful reviewing in the best journal. The papers’ comment threads on PubPeer have attracted some 40000 viewers. It’s hardly su[r]prising they caught issues that three overworked referees and a couple of editors did not. Science is now able to self-correct instantly. Post-publication peer review is here to stay” (PubPeer, 2014).

Open platforms

Open platforms peer review is review facilitated by a different organizational entity than the venue of publication. Recent years have seen the emergence of dedicated platforms which aim to augment the traditional publishing ecosystem by de-coupling review functionalities from journals. Services like RUBRIQ (http://www.rubriq.com), Axios Review (https://axiosreview.org) and Peerage of Science (https://www.peerageofscience.org/) offer “portable” or “independent” peer review. Each platform invites authors to submit manuscripts directly to it, organises review amongst its own community of reviewers, and returns review reports. In the case of RUBRIQ and Peerage of Science, participating journals then have access to these scores and manuscripts and so can contact authors with a publishing offer or a suggestion to submit. Axios, meanwhile, directly forwards the manuscript, along with reviews and reviewer identities, to the author’s preferred target journal. The models vary in their details – RUBRIQ, for example, pays its reviewers, whereas Peerage of Science and Axios operate on a community model where reviewers earn discounts on having their own work reviewed, and Peerage of Science is entirely free for authors, recovering its costs from publishers – but all aim in their own ways to reduce inefficiencies in the publication process, especially the duplication of effort. Whereas in traditional peer review a manuscript may be reviewed afresh at each journal to which it is submitted after rejection, such services need just one set of reviews, which can be carried over to multiple journals until the manuscript finds a home (hence “portable” review).

Other decoupled platforms aim at solving different problems. Publons (https://publons.com/) seeks to address the problem of incentives in peer review by turning reviews into measurable research outputs: it collects information about peer review activity from reviewers and publishers to produce reviewer profiles detailing verified peer review contributions that researchers can add to their CVs. Overlay journals like Discrete Analysis, discussed above, are another example of open platforms. Peter Suber (quoted in Cassella and Calvi, 2010) defines the overlay journal as “an open-access journal that takes submissions from the preprints deposited at an archive (perhaps at the author’s initiative), and subjects them to peer review…. Because an overlay journal doesn’t have its own apparatus for disseminating accepted papers, but uses the pre-existing system of interoperable archives, it is a minimalist journal that only performs peer review.” Finally, there are the many venues through which readers can now comment on already-published works (see also “open final-version commenting” above), including blogs and social networking sites, as well as dedicated platforms such as PubPeer (https://pubpeer.com/).

In part three of this series, we analyse the distribution of these traits amongst our corpus of 122 definitions to produce a provisional definition of “open peer review”, going on to discuss the process of revising this definition in response to feedback received as part of our recent OPR survey of more than 3,000 authors, editors and reviewers. The final aim is a standard, community-endorsed definition of OPR which allows us all to avoid future ambiguities in discussion about and research into open peer review.

References

Armstrong, J.S., 1982. Barriers to scientific contributions: The author’s formula. Behav. Brain Sci. 5, 197–199. doi:10.1017/S0140525X00011201

Boldt, A., 2011. Extending arXiv.org to achieve open peer review and publishing. J. Sch. Publ. 42, 238–242.

Bornmann, L., Herich, H., Joos, H., Daniel, H.-D., 2012. In public peer review of submitted manuscripts, how do reviewer comments differ from comments written by interested members of the scientific community? A content analysis of comments written for Atmospheric Chemistry and Physics. Scientometrics 93, 915–929. doi:10.1007/s11192-012-0731-8

Budden, A.E., Tregenza, T., Aarssen, L.W., Koricheva, J., Leimu, R., Lortie, C.J., 2008. Double-blind review favours increased representation of female authors. Trends Ecol. Evol. 23, 4–6. doi:10.1016/j.tree.2007.07.008

Cassella, M., Calvi, L., 2010. New journal models and publishing perspectives in the evolving digital environment. IFLA J. 36, 7–15. doi:10.1177/0340035209359559

Elsevier, 2016. What is peer review? [WWW Document]. URL https://www.elsevier.com/reviewers/what-is-peer-review (accessed 8.22.16).

EMBO Journal, 2016. About | The EMBO Journal [WWW Document]. URL http://emboj.embopress.org/about (accessed 8.24.16).

Fisher, M., Friedman, S.B., Strauss, B., 1994. The effects of blinding on acceptance of research papers by peer review. JAMA 272, 143–146.

Fitzpatrick, K., 2011. Planned Obsolescence. NYU Press, New York, NY.

Ford, E., 2015. Open peer review at four STEM journals: an observational overview. F1000Research 4.

Ford, E., 2013. Defining and Characterizing Open Peer Review: A Review of the Literature. J. Sch. Publ. 44, 311–326. doi:10.3138/jsp.44-4-001

Frontiers, 2016. About Frontiers | Academic Journals and Research Community [WWW Document]. URL http://home.frontiersin.org/about/review-system (accessed 8.24.16).

Godlee, F., Gale, C.R., Martyn, C.N., 1998. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA 280, 237–240.

Gowers, T., 2015. Discrete Analysis — an arXiv overlay journal. Gowers's Weblog.

Hames, I., 2014. The changing face of peer review. Sci. Ed. 1, 9–12. doi:10.6087/kcse.2014.1.9

Hanson, B., Lawrence, R., Meadows, A., Paglione, L., 2016. Early adopters of ORCID functionality enabling recognition of peer review: Two brief case studies. Learn. Publ. 29, 60–63. doi:10.1002/leap.1004

Harnad, S., 2000. The invisible hand of peer review [WWW Document]. Exploit Interact. URL http://cogprints.org/1646/ (accessed 8.24.16).

Herron, D.M., 2012. Is expert peer review obsolete? A model suggests that post-publication reader review may exceed the accuracy of traditional peer review. Surg. Endosc. 26, 2275–2280. doi:10.1007/s00464-012-2171-1

Justice, A., Cho, M., Winker, M., Berlin, J., Rennie, D., 1998. Does masking author identity improve peer review quality?: A randomized controlled trial. JAMA 280, 240–242. doi:10.1001/jama.280.3.240

Leek, J.T., Taub, M.A., Pineda, F.J., 2011. Cooperation between Referees and Authors Increases Peer Review Accuracy. PLOS ONE 6, e26895. doi:10.1371/journal.pone.0026895

McNutt, R., Evans, A., Fletcher, R., Fletcher, S., 1990. The effects of blinding on the quality of peer review: A randomized trial. JAMA 263, 1371–1376. doi:10.1001/jama.1990.03440100079012

Monsen, E.R., Horn, L.V., 2007. Research: Successful Approaches. American Dietetic Association.

Mulligan, A., Hall, L., Raphael, E., 2013. Peer review in a changing world: An international study measuring the attitudes of researchers. J. Am. Soc. Inf. Sci. Technol. 64, 132–161. doi:10.1002/asi.22798

Nicholson, J., Alperin, J.P., 2016. A brief survey on peer review in scholarly communication. The Winnower.

Nobarany, S., Booth, K.S., 2015. Use of politeness strategies in signed open peer review. J. Assoc. Inf. Sci. Technol. 66, 1048–1064. doi:10.1002/asi.23229

Open Scholar, 2016. Open access repositories start to offer overlay peer review services [WWW Document]. Open Sch. CIC. URL http://www.openscholar.org.uk/institutional-repositories-start-to-offer-peer-review-services/ (accessed 8.25.16).

Perakakis, P., Taylor, M., Mazza, M., Trachana, V., 2010. Natural selection of academic papers. Scientometrics 85, 553–559. doi:10.1007/s11192-010-0253-1

Peters, D.P., Ceci, S.J., 1982. Peer-review practices of psychological journals: The fate of published articles, submitted again. Behav. Brain Sci. 5, 187–195. doi:10.1017/S0140525X00011183

Pöschl, U., 2012. Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation. Front. Comput. Neurosci. 6, 33. doi:10.3389/fncom.2012.00033

PubPeer, 2014. Science self-corrects – instantly. PubPeer Online J. Club.

Research Information Network, 2008. Activities, costs and funding flows in the scholarly communications system in the UK: Report commissioned by the Research Information Network (RIN).

Ross, J., Gross, C., Desai, M., 2006. Effect of blinded peer review on abstract acceptance. JAMA 295, 1675–1680. doi:10.1001/jama.295.14.1675

Sandewall, E., 2012. Maintaining Live Discussion in Two-Stage Open Peer Review. Front. Comput. Neurosci. 6. doi:10.3389/fncom.2012.00009

Schekman, R., Watt, F., Weigel, D., 2013. The eLife approach to peer review. eLife 2, e00799. doi:10.7554/eLife.00799

ScienceOpen, 2014. Peer Review Guidelines [WWW Document]. Sci. URL http://about.scienceopen.com/peer-review-guidelines/ (accessed 10.17.16).

van Rooyen, S., Delamothe, T., Evans, S.J.W., 2010. Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial. BMJ 341, c5729. doi:10.1136/bmj.c5729

van Rooyen, S., Godlee, F., Evans, S., Black, N., Smith, R., 1999. Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ 318, 23–27. doi:10.1136/bmj.318.7175.23

Walker, R., Rocha da Silva, P., 2015. Emerging trends in peer review - a survey. Front. Neurosci. 9. doi:10.3389/fnins.2015.00169

Ware, M., 2011. Peer review: recent experience and future directions. New Rev. Inf. Netw. 16, 23–53.