SPECIAL ARTICLE

Bibliometric indexes, databases and impact factors in cardiology

Igor R C Bienert (I,II); Rogério Carvalho de Oliveira (II); Pedro Beraldo de Andrade (I); Carlos Antonio Caramori (III)

DOI: 10.5935/1678-9741.20150019

ABBREVIATIONS AND ACRONYMS

GSM: Google Scholar Metrics

IF: Impact Factor

ISI: Institute for Scientific Information

JCR: Journal Citation Report

SCI: Science Citation Index

SJR: SCImago Journal Rank

SNIP: Source Normalized Impact per Paper

WoS: Web of Science

INTRODUCTION

Although there are several ways to analyze scientific production, impact indicators are mainly of two types: research impact indicators (such as the number of citations received) and source impact indicators (the impact factor of the journal itself). Such indicators are derived from the indexing of research (the primary data source) in databases (secondary sources)[1]. Several databases provide bibliometric analyses and citation counts. The most referenced is the Web of Science (WoS), owned by Thomson Reuters; however, other databases, such as Scopus, owned by Elsevier, and the more recent Google Scholar Metrics (GSM), owned by Google, are under increasing development for citation analysis.

Impact of bibliometric indexes

Scientific production is the end point of all academic and research activity, being the instrument through which the scientific community presents its results and submits its work to external scrutiny. Research results should be credible, should be accessible, and, once published, should not be modified. Likewise, different analyses of a primary research database should be clearly identified. In addition, scientific production must be subject to clear evaluation criteria, such as the peer-review process[2].

The importance of bibliometric indexes is reflected in the weight that research and funding institutions assign to them. In Brazil, the institutions at the base of scientific production (universities, CAPES, CNPq, FAPESP, etc.) orient researchers to direct their publications according to these indexes[3]. The most visible journals are the most read and, consequently, the most cited, which generates greater demand for manuscript evaluation and greater competition for recognition. This competition acts as a quality enhancer for published research, and the resulting citations increase the prestige of both the publishing journal and the researcher. Thus, the production and funding institutions themselves develop publication rules in which the impact factor of a journal becomes a criterion of production quality, closing the circle.

Impact Factor

(thomsonreuters.com/journal-citation-reports)

The IF is the most widely used indicator worldwide and is available online through a paid subscription. It is a source impact indicator (i.e., of the journal or periodical itself) and measures how often the articles of a given source are cited. It was created by Eugene Garfield in 1955 to evaluate the journals indexed in the Science Citation Index (SCI), a multidisciplinary database in science and technology of the Institute for Scientific Information (ISI), founded by Garfield in 1960 and acquired in 1992 by Thomson Reuters. The SCI is now offered through the Thomson Reuters database Web of Science (WoS). This database allows researchers to identify which articles are cited most often and by whom. Moreover, inclusion in the database itself seems to increase a journal's impact, making it more visible and giving the publication a quality label[4].

The impact factor is calculated by taking the number of times the articles a journal published in the previous two years were cited and dividing it by the total number of articles the journal published in the same period. The impact factor is published annually as part of the Journal Citation Report (JCR), serving as a beacon of the journal's exposure.

IF 2015 = (citations to 2013 articles + citations to 2014 articles) / (articles published in 2013 + articles published in 2014)
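As a worked illustration of the formula above, a minimal Python sketch with invented figures:

def impact_factor(citations_to_y1, citations_to_y2, articles_y1, articles_y2):
    """Two-year impact factor: citations received by the items a journal
    published in the two previous years, divided by the number of citable
    items it published in those two years."""
    return (citations_to_y1 + citations_to_y2) / (articles_y1 + articles_y2)

# Hypothetical journal: 400 citations to its 2013 articles, 350 to its
# 2014 articles; 250 citable articles in 2013 and 260 in 2014.
print(round(impact_factor(400, 350, 250, 260), 2))  # 1.47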

However, some limitations of the IF concern the calculation method itself: the numerator counts as "citations" those received not only by original articles but also by letters, editorials, reports, and abstracts, while the denominator counts as "articles" only original articles, review articles, and reports[5]. This amplifies the IF through a variety of communications, including, for example, communications challenging the article itself. All citations have equal weight in the IF calculation. In addition, a citation counts only if it comes from within the database itself (SCI), which is estimated to cover half of the existing peer-reviewed publications[6,7]; coverage also varies with the publication area[5], with the nature of the research (articles on basic research, reviews, or updates are often the most cited and amplify the journal's impact)[8], and even with the publication language, making it difficult for non-English publications to obtain a high IF[9,10]. It is also important to remember that the IF is intended to analyze the impact of journals, not of individual articles. Still, the IF is often mistakenly used in the rating of academic researchers[11].

Even the absolute value of a journal's IF has less comparative value than is intuitively attributed to it, because different areas have different publication volumes. As an example, an IF of 1.5 would not be particularly high for a general cardiology journal, but it would be considerable for a very specialized journal (for example, one on molecular diagnostics in cardiology), because its articles are read and cited mostly by a small, specialized target population[12]. Also, journals dedicated to very specific areas usually have a higher impact due to the natural restriction of the attention focus. A further point to consider is the probability of self-citations and cross-citations[13] among research groups, institutions, or individual authors, a relevant topic in the publication universe. Despite all the criticism, the IF remains the most widely used journal impact reference within the scientific field today.

Other impact measurement indexes and databases

The IF reigned supreme for decades in the evaluation of periodicals; however, alternative indicators of considerable validity have been designed, whose rankings show good correspondence between rating quartiles, although the choice of a specific indicator can strongly affect the classification of a particular journal[14,15]. Two other widely used indexes are the SNIP (Source Normalized Impact per Paper) and the SJR (SCImago Journal Rank), developed respectively by Professor Henk Moed at Leiden University, in the Netherlands, and Professor Félix de Moya at the University of Granada, Spain. These indexes also evaluate sources. Unlike the IF, they are based not on the WoS database but on the Scopus database, owned by the Dutch company Elsevier, and they include more publications in languages other than English. These indexes seek to circumvent the criticism directed at the IF. The SNIP measures the citation impact of a journal by weighing it against the total number of citations in a particular area of interest, while the SJR corrects the weight of a citation according to the area of interest and also according to the reputation and quality of the citing journal.

In an evaluation of databases, a study compared PubMed, Scopus, Web of Science, and Google Scholar and concluded that PubMed remains an optimal tool for biomedical electronic research. Scopus covers a wider journal range, but it is currently limited to recent articles (published after 1995) compared with Web of Science. PubMed and Google Scholar have the advantage of free access, and Scopus offers about 20% more coverage than Web of Science. The study reported relatively inconsistent accuracy for Google Scholar[16].

SNIP index (Source Normalized Impact per Paper)

(http://www.journalindicators.com)

The SNIP index can be checked online quickly and free of charge. SNIP is calculated in such a way that citations are equalized across fields, correcting the variations found in the IF by giving citations different weights. Thus, a single citation receives a higher value in research areas where citations are scarce, and vice versa, making the SNIP a more reliable indicator than the IF for comparing journals across disciplines[17].
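Moed's exact definition involves the "citation potential" of a journal's subject field within the database; the sketch below, with invented names and figures, illustrates only the normalization idea: the same raw impact per paper is worth more in a field where citations are scarce.

def snip_like(citations_per_paper, field_citation_potential, database_median_potential):
    """SNIP-style normalization: divide a journal's raw impact per paper
    by the citation potential of its field relative to the database
    median, so one citation counts for more in low-citation fields."""
    relative_potential = field_citation_potential / database_median_potential
    return citations_per_paper / relative_potential

# Two journals with the same raw impact (2.0 citations per paper):
print(snip_like(2.0, 1.5, 3.0))  # 4.0 in a low-citation field
print(snip_like(2.0, 6.0, 3.0))  # 1.0 in a high-citation field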

The SJR index (SCImago Journal Rank)

(http://www.scimagojr.com)

Like the SNIP, the SJR is also a free index. The SJR incorporates the concept of quality into its construction by taking into account the individual weight of each journal in citations. Thus, the SJR indicates which journals have articles cited by the most prestigious periodicals (calculated through a PageRank-like algorithm), and not simply which periodicals are cited most often. A citation from a source with a higher SJR carries greater weight than a citation from a source with a lower SJR. Also, the SJR considers the three years prior to the current one in its calculation and reduces the influence of self-citations. Additionally, the SJR counts in the denominator all articles from a journal, not just the "citable" ones as in the IF. The classification of journals in cardiovascular medicine according to each index is presented in Tables 1 to 3.
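The actual SJR computation includes further refinements (a three-year citation window, a cap on self-citations, size normalization); the sketch below, using an invented journal-to-journal citation matrix, illustrates only the PageRank idea that citations from prestigious sources weigh more.

import numpy as np

def prestige_scores(citation_matrix, damping=0.85, iterations=100):
    """PageRank-style iteration: citation_matrix[i, j] is the fraction of
    journal i's outgoing citations that go to journal j. Prestige flows
    along citations, so being cited by high-scoring journals raises a
    journal's own score."""
    n = citation_matrix.shape[0]
    scores = np.full(n, 1.0 / n)
    for _ in range(iterations):
        scores = (1 - damping) / n + damping * (citation_matrix.T @ scores)
    return scores / scores.sum()

# Three hypothetical journals; the third receives most of the citations
# from the other two and therefore earns the highest prestige score.
C = np.array([[0.0, 0.2, 0.8],
              [0.3, 0.0, 0.7],
              [0.5, 0.5, 0.0]])
print(prestige_scores(C).round(3))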


H-index

The h-index attempts to measure both the productivity and the citation impact of the published body of work of a researcher, a research group, or an institution; it is not primarily intended to evaluate journals. The index is based on the set of the researcher's most cited papers and the number of citations they have received in other publications. It was developed in 2005 by the physicist Jorge E. Hirsch[18] and corresponds to the number of articles by a particular author that have each received at least that same number of citations. In other words, if an author has 50 articles in his body of publications and, among his most cited articles, 5 with 8 citations and 7 with 7 citations, his h-index will be 7. Besides the advantage of merging impact (citations) and production (publications), the h-index can also be applied to institutions and research groups. However, some criticism is also directed at its indiscriminate application. For example, highly cited papers carry only relative importance in the index, and a lower h-index can result from more selective publication or simply from a shorter productive career. In Table 4, there is an example for a research group: researcher "A" has more high-impact papers than researcher "D", although they have the same h-index. In addition, the h-index does not differentiate primary authorship from co-authorship, nor the citation context (e.g., supportive citations from negative ones), and it is still influenced by self-citations, whether from the researcher himself or from the research group.
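The computation is straightforward, as the minimal Python sketch below shows; it reproduces the example from the text, assuming the remaining 38 of the 50 articles were cited once each.

def h_index(citation_counts):
    """h-index: the largest h such that the author has h papers with at
    least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# 5 papers with 8 citations, 7 papers with 7 citations, 38 papers with 1.
paper_citations = [8] * 5 + [7] * 7 + [1] * 38
print(h_index(paper_citations))  # 7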


Some variations have been created to improve the performance of the index, for example, the "m index" developed by Hirsch himself, calculated by dividing the h-index by the number of years of the researcher's productive life[18,19].
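As a minimal sketch with invented figures:

def m_index(h, years_publishing):
    """Hirsch's m: the h-index divided by the number of years of the
    researcher's productive (publishing) life."""
    return h / years_publishing

# Hypothetical researcher: h-index of 20 after 10 years of publishing.
print(m_index(20, 10))  # 2.0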

Despite the criticism, the h-index is considered a good bibliometric indicator and is preferable to individual parameters such as the total number of published articles, the total number of citations, or the most cited articles alone. Currently, the h-index is available automatically on the Lattes Platform[20] as a production indicator drawn from the WoS and Scopus databases. For researchers not included in Lattes, it can be found directly in the databases described above.

The WEBQUALIS

(http://qualis.capes.gov.br)

CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) is a Brazilian government agency for research support that offers its own journal qualification system, covering the scientific production of Brazilian postgraduate programs. This system sorts journals into strata (A1, A2, B1 to B5, and C), in descending order of qualification, and the same publication may receive different qualification levels in different research areas (Figure 1). The classification criteria for each area are available on the portal itself and are generally based on various impact factors in global databases. Other criteria that can influence this classification are the number of articles published over the last three years, periodicity, accessibility, and the publication of articles by authors from different institutions not directly related to the journal's editorial staff. Although the classification system receives criticism, mainly regarding the adopted references[21] and the weights given to different areas[22], it is one of the main criteria used for the evaluation of postgraduate programs in Brazil.


CONCLUSION

A basic understanding of the most common bibliometric tools, including their importance and their limitations in the scientific production chain, is vital for researchers, journals, and institutions. Several new indicators are emerging and being constantly improved, and they may become more widespread in the near future. It is important to use impact measures appropriately and to recognize their limitations, as this practice has a direct effect on performance measures and, consequently, on the raising of research funding.

REFERENCES

1. Michán L, Llorente-Bousquets J. [Bibliometry of biological systematics in Latin America during the twentieth century in three global databases]. Rev Biol Trop. 2010;58(2):531-45. [MedLine]

2. Halliday L. Scholarly communication, scholarly publication and status of emerging formats. Inf Res. 2001;6(4) [Accessed Mar 17 2015]. Available at: http://InformationR.net/ir/paper111.html

3. Bicas HEA, Rother ET, Braga MER. [Impact factors, other bibliometric indexes and academic performance]. Arq Bras Oftalmol. 2002;65(2):151-2.

4. Varela D. The contribution of ISI indexing to a paper's citations: results of a natural experiment. Eur Polit Sci. 2012;12(2):245-53.

5. Fernández-Llimós F. [Analysis of the coverage of the Pharmaceutical Care concept in primary and secondary information sources] [Dissertation]. Granada: Universidad de Granada; 2003.

6. Ietto-Gillies G. The evaluation of research papers in the XXI century. The Open Peer Discussion system of the World Economics Association. Front Comput Neurosci. 2012;6:54. [MedLine]

7. Committee HoCSaT. Peer review in scientific publications [Accessed Mar 17 2015]. Available at: http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856.pdf

8. Falagas ME, Alexiou VG. The top-ten in journal impact factor manipulation. Arch Immunol Ther Exp (Warsz). 2008;56(4):223-6. [MedLine]

9. Sevinc A. Manipulating impact factor: an unethical issue or an Editor's choice? Swiss Med Wkly. 2004;134(27-28):410. [MedLine]

10. Smith R. Journal accused of manipulating impact factor. BMJ. 1997;314(7079):463.

11. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314(7079):498-502. [MedLine]

12. Sloan P, Needleman I. Impact factor. Brit Dent J. 2000;189(1):1.

13. Van Noorden R. Brazilian citation scheme outed. Nature. 2013;500(7464):510-1. [MedLine]

14. Kianifar H, Sadeghi R, Zarifmahmoudi L. Comparison Between Impact Factor, Eigenfactor Metrics, and SCimago Journal Rank Indicator of Pediatric Neurology Journals. Acta Inform Med. 2014;22(2):103-6. [MedLine]

15. Falagas ME, Kouranos VD, Arencibia-Jorge R, Karageorgopoulos DE. Comparison of SCImago journal rank indicator with journal impact factor. FASEB J. 2008;22(8):2623-8. [MedLine]

16. Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. FASEB J. 2008;22(2):338-42. [MedLine]

17. Moed HF. The source normalized impact per paper is a valid and sophisticated indicator of journal citation impact. J Am Soc Inf Sci Tec. 2011;62(1):211-3.

18. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569-72. [MedLine]

19. Hirsch JE. Does the H index have predictive power? Proc Natl Acad Sci U S A. 2007;104(49):19193-8. [MedLine]

20. Brasil. Ministério da Ciência, Tecnologia e Inovação. Plataforma Lattes. CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico; 2014 [Accessed Dec 10 2014]. Available at: http://lattes.cnpq.br/

21. Rocha-e-Silva M. [Open letter to the president of CAPES: the new Qualis, which has nothing to do with the science of Brazil.]. Pró-Fono. 2009;21(4):275-8. [MedLine]

22. Rocha-e-Silva M. [The new Qualis, or the announced tragedy]. Clinics (São Paulo). 2009;64(1):1-4. [MedLine]

No financial support.

Authors' roles & responsibilities

IRCB: Bibliographic research, data collection, writing

RCO: Bibliographic research, data collection, writing

PBA: Writing and English version

CAC: Guidance, theme proposition, general review

Article received on Friday, February 27, 2015
