Document Type : Articles

Authors

Department of Knowledge & Information Sciences, Faculty of Education & Psychology, Shiraz University, Shiraz, Iran

Abstract

To test whether journal-level indicators can reasonably be applied to the evaluation of individual researchers, the present study examined the structural similarities between journal-evaluation indicators (i.e., JIF, SNIP, and SJR) and author-evaluation indicators (i.e., publication counts, citations per paper, and the h and g indices) through factor analysis. Iranian papers published in SCI in 2008 were chosen as the corpus of the study. The results showed that the author- and journal-evaluation indicators load on two entirely different factor groups and share no common structure. On this basis, one may conclude that what the journal-evaluation indices measure is completely different from what the author-level indices do. It would therefore be illogical to use these two groups of indices interchangeably, or for purposes they were not designed for; otherwise, consistent results cannot be expected from such endeavors.
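The author-level h and g indices mentioned in the abstract have simple algorithmic definitions, which a minimal sketch can make concrete. The following Python functions (illustrative only; they are not from the study itself) compute both indices from a list of per-paper citation counts, following the standard definitions (Hirsch, 2005, for h; Egghe's g-index for g):

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h


def g_index(citations):
    """g-index: the largest g such that the top g papers
    together have at least g**2 citations.
    (Simplification: g is capped at the number of papers.)"""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g


# Example: five papers with 10, 8, 5, 4, and 3 citations.
papers = [10, 8, 5, 4, 3]
print(h_index(papers))  # 4 (four papers have >= 4 citations each)
print(g_index(papers))  # 5 (top 5 papers total 30 >= 25 citations)
```

Because g weights highly cited papers more heavily than h does, g is always at least as large as h for the same publication record, which is one reason the two are treated as related but distinct author-level indicators.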
