Wednesday 10 November 2021

Open science promotes the idea that citizens should have the same access to information as researchers.

Article by Olivier Pourret, UniLaSalle

From pandemics to climate change, by way of automation and big data, the challenges of this century are immense, and responding to them as effectively as possible requires that science be open to all. It seems essential that citizens have the same access to information as researchers, and that scientists have access to interconnected, high-quality knowledge repositories so as to advance our understanding of the world and democratize knowledge.

These are among the guiding principles of the open science movement, which holds that sustainability and inclusion are essential and can be fostered by shared practices, infrastructure, and funding models that ensure the equitable participation of scientists from less advantaged institutions and countries in the pursuit of knowledge and progress.

We must ensure that the benefits of science are shared between scientists and the general public without restriction. But how do we do this? Part of the answer lies in building national science systems capable of sharing and improving a diversity of knowledge.

The predominance of scientific articles in English, as well as the still too small number of open access publications, may be due to the greater weight given to articles in English-language international journals during evaluations.

 


[Embedded tweet from Olivier Pourret (@olivier_pourret), August 23, 2021]

 

As a result, the relevance of the research reported in these publications to local communities may be called into question because of language barriers. Here are some open science practices that could transform the current rules governing evaluation so that they better reflect the real performance and impact of research.

As a reminder, in France, the High Council for the Evaluation of Research and Higher Education (HCERES) is in charge of evaluating research products and activities.

 

Reaching the right audience

The first step in ensuring that our work reaches the right audience is to make it widely accessible, as with open access to publications. But accessibility does not mean that our target audience will "see" our work. There are thousands of journals available and no one has the time or resources to read every publication.

The second step is to create a community and engage the general public. Online communication methods (e.g., Twitter, Reddit, Facebook, blogs) have often had a bad reputation in scientific circles and are generally not perceived as scholarly.

Yet these platforms can be a powerful tool for disseminating research. It can be something as simple as writing a blog post (on Echosciences, for example) or an article for The Conversation, taking part in a science communication podcast, tweeting our latest findings, or simply drawing a cartoon or a scientific sketch.


[Embedded tweet from The Conversation France (@FR_Conversation), July 7, 2020]


It's important that the knowledge we produce can quickly reach the people it is relevant to. That's why engaged researchers are often more visible in public than in academic circles: they are frequently invited into traditional mass media, such as newspapers, radio, and television, and are happy to give popular science talks to non-expert audiences.

For example, together with six researchers from six countries (China, France, South Africa, the United States, Great Britain, and Indonesia), we recently wrote a paper on open access practices for earth science publications. We publicized this work through a blog post in Indonesian and another in English, summarized the main results on French-language Wikipedia, and relayed it on social networks: in China via Sina Weibo, more widely in Asia via WeChat, and internationally via Facebook, Twitter, and Reddit.

The most important thing is that our knowledge is dispersed and gets to where it can be understood and used.

 

Changing the criteria for evaluation

The development of open science also raises the question of the evaluation of research and researchers. Indeed, its implementation requires that all research processes and activities be taken into account in the evaluation and not only articles published in international peer-reviewed journals.

However, we must be careful not to fall into the evaluation trap, as some colleagues and I have pointed out. How can we expect to benefit from an evaluation of research based on the number of articles and on the citations of those articles, if those citation counts only reflect how our work is cited in other scientific articles and not the direct impact of our research, particularly on the general public?


[Embedded tweet from The Conversation France (@FR_Conversation), February 19, 2020]


The San Francisco Declaration on Research Assessment (DORA), published in 2012, and the Leiden Manifesto, published in 2015, aim to improve evaluation practices, notably by warning against the misuse of certain bibliometric indicators in recruitment, promotion, and the individual evaluation of researchers.

To date, more than 2,300 organizations from over 90 countries have signed the declaration, 57 of them in France, along with more than 18,000 researchers, including more than 1,200 in France.

 

The impact factor of journals, a biased indicator

These two texts note in particular that the various stakeholders in research systems make use of two indicators that are widely criticized by the scientific community.

The San Francisco Declaration focuses in particular on the misuse of the journal impact factor (the average number of citations received, over a given year, by the articles a journal published during the previous two years). This indicator is used as an indirect measure of a journal's visibility, but the way it is calculated favors certain journals and is also open to manipulation.
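To make the calculation concrete, here is a sketch of the two-year impact factor for a given year; the year 2021 is used purely as an illustration, and the official calculation further distinguishes which items count as "citable":

    \mathrm{IF}_{2021} = \frac{\text{citations received in 2021 by articles published in 2019 and 2020}}{\text{number of articles published in 2019 and 2020}}

Because this is an average over a short window, a handful of very highly cited articles can dominate it, which is one reason comparisons based on this figure can be misleading.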

Moreover, it does not take into account the diversity of practices between disciplines, which can introduce bias in comparisons between scientists.

 

Too much emphasis on the number of citations

The Leiden Manifesto focuses on the h-index, proposed in 2005 by the physicist Jorge Hirsch, which very quickly became widely used.

The ambition of this composite indicator was to capture, in a single number, both how many articles a researcher has published and their scientific impact, measured by the number of citations those articles receive.

In reality, the definition of this index, whose simplicity is what made it so appealing, makes the number of publications the dominant variable and does not overcome the difficulty of measuring two quantities (the number of articles and the number of citations) with a single indicator.


[Embedded tweet from The Conversation France (@FR_Conversation), May 11, 2020]


The h-index puts on the same footing a researcher with few publications, all of them highly cited, and a very prolific researcher only a few of whose publications are cited. For example, a researcher with 5 publications, each cited more than 50 times, will have an h-index of 5. Similarly, a researcher with 50 publications, of which 5 are cited at least 5 times, will also have an h-index of 5.
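As a minimal illustration of how this indicator behaves (the citation counts below are hypothetical, chosen only to reproduce the two profiles just described), the h-index can be computed from a list of per-article citation counts:

    # h-index: the largest h such that at least h articles have each
    # been cited at least h times.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)  # most-cited articles first
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:  # this article is still cited at least 'rank' times
                h = rank
            else:
                break
        return h

    # Hypothetical profiles matching the example above:
    selective = [60, 57, 54, 52, 51]              # 5 articles, all cited more than 50 times
    prolific = [6, 6, 5, 5, 5] + [1] * 45         # 50 articles, 5 of them cited at least 5 times
    print(h_index(selective), h_index(prolific))  # both print 5

Both profiles end up with the same h-index of 5 despite very different publication and citation patterns, which is precisely the limitation described above.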

The h-index also depends on the database from which it is calculated, as we noted recently, because only the publications indexed in that database are taken into account.

In the context of open science and of publishing results for the general public, articles in local journals (written in the language of the country) are often not indexed, so the citations of these works are not counted in this type of indicator. This creates inequalities in evaluation for researchers who have made the effort to choose this means of dissemination.

 

Other more relevant indicators?

The San Francisco Declaration and the Leiden Manifesto not only criticize these two indicators, but also make recommendations for the use of scientometric indicators, especially in evaluation.

These recommendations focus on a number of issues: the need to end the use of journal-based indicators, such as impact factors, in funding, appointments and promotions; the need to evaluate research on its intrinsic value rather than on the basis of the journal in which it is published; and the need to make the best use of the opportunities offered by online publishing (such as lifting unnecessary restrictions on the number of words, figures and references in articles and exploring new indicators of significance and impact).

Although evaluation has always gone hand in hand with scientific research, the race driven by the dominant criteria, such as over-publication in prestigious journals, should be weighed against what is actually transferred to society. Finally, greater transparency should go together with the adoption of a more diversified set of measures for evaluating researchers.


Olivier Pourret, Lecturer and researcher in geochemistry and head of scientific integrity and open science, UniLaSalle

This article is republished from The Conversation under a Creative Commons license. Read the original article.