InCites Benchmarking & Analytics: I am an Academic Administrator

InCites Benchmarking & Analytics is a research analytics tool.

Institutional Analysis Guides

External Bibliometrics Resources

Training options

Learning options

Technical Support

Understanding the Metrics

Web of Science Documents:  The total number of Web of Science Core Collection papers for that entity. This count includes all document types, and it is a measure of productivity.

Times Cited:  The number of times a set of Web of Science Documents has been cited. This indicates the total influence of a set of publications. This metric is NOT normalized to take into account differing citation patterns by field or the size of an institution/entity.

h-index:  This metric was introduced by Dr. J.E. Hirsch in 2005 as a way to combine productivity (number of documents) and impact (number of citations) in one metric. To calculate the h-index for a set of publications, arrange the publications from most to least cited on a graph.

The h-index represents the point on the slope where the number of citations = number of papers. So, h = the number of papers that have received at least h citations. For example, if you have an h-index of 20, it means you have authored 20 papers that have each been cited at least 20 times. 
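To make the definition concrete, here is a minimal Python sketch (not an InCites feature) that computes the h-index from a list of per-paper citation counts; the example counts are illustrative.

```python
def h_index(citation_counts):
    """h-index for a list of per-paper citation counts."""
    # Arrange the papers from most to least cited.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # h is the largest rank at which the paper in that position
        # still has at least that many citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4:
# four papers have each been cited at least 4 times.
print(h_index([10, 8, 5, 4, 3]))  # 4
```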

h-index graph

Appropriate use:  When using the h-index, make sure that you are comparing similar researchers. The h-index is not a normalized measure, and you will want to take into account two factors:  time and discipline.

Time:  The h-index is a time-dependent measure. It is proportional to the length of a researcher’s career and how many articles they have published. Early-career researchers would be at a disadvantage when compared to more senior researchers, because the latter have had more time to produce more work and to accumulate citations to their output.

Discipline:  Different academic disciplines have different patterns of citation activity, which means that a good h-index will differ by field. Comparing a humanities scholar to a clinical medicine researcher using h-index would be an unfair comparison.

What is normalization? A paper's citation count won't tell you the whole story by itself. When was the paper published? What type of document is it? How frequently does work in that field typically get cited? Normalization puts citation counts in context. Normalized indicators show you how a paper or group of papers performs relative to averages or baselines. 

Normalize for discipline, time, and document type

Category Normalized Citation Impact: The Category Normalized Citation Impact (CNCI) of a document is calculated by dividing an actual citation count by an expected citation rate for documents with the same document type, year of publication, and subject area. When a document is assigned to more than one subject area, the harmonic average is used. The CNCI of a set of documents is the average of the CNCI values for all of the documents in the set.

Example:  A Plant Sciences article published in 2014 has been cited 46 times. Is that good, bad, or average performance? First we'll need to calculate a baseline for comparison, which is an expected citation rate for the paper.

Category Expected Citations = Average cites to items of the same document type (article), year (2014), and category (Plant Sciences) = 2.32

Category Normalized Citation Impact = Actual Citations / Category Expected Citations = 46 / 2.32 ≈ 19.83

CNCI is an unbiased indicator of impact irrespective of age, subject focus, or document type. Therefore, it allows comparisons between entities of different sizes and different subject mixes. A CNCI value of one represents performance on par with the world average, values above one are considered above average, and values below one are considered below average. A CNCI value of two represents performance at twice the world average.
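As a rough illustration of the arithmetic, here is a minimal Python sketch. It assumes the expected citation rates (baselines) for the relevant document type, year, and categories are already known (InCites derives these from the Web of Science data), and it treats the harmonic average mentioned above as applying to the category baselines of a multi-category document; the function names and the second example's numbers are illustrative.

```python
from statistics import harmonic_mean

def cnci(actual_citations, expected_rates):
    """CNCI for one document.

    expected_rates: expected citation rates for each subject category the
    document is assigned to (same document type and publication year).
    """
    # For a document in more than one category, a harmonic average of the
    # category baselines is used as the expected citation rate (an
    # interpretation of the description above).
    baseline = harmonic_mean(expected_rates)
    return actual_citations / baseline

def set_cnci(documents):
    """Average CNCI over a set of (actual_citations, expected_rates) pairs."""
    values = [cnci(actual, expected) for actual, expected in documents]
    return sum(values) / len(values)

# The Plant Sciences example above: 46 actual citations vs. a 2.32 baseline.
print(round(cnci(46, [2.32]), 2))                            # 19.83
# A two-document set; the second document's values are made up.
print(round(set_cnci([(46, [2.32]), (10, [4.0, 6.0])]), 2))  # 10.96
```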

Journal Normalized Citation Impact: The Journal Normalized Citation Impact (JNCI) indicator is similar to the Category Normalized Citation Impact, but instead of normalizing for subject area or field, it normalizes for the journal in which the document is published.

The JNCI indicator can reveal information about the performance of a publication (or a set of publications) in relation to how other researchers perform when they publish their work in a given journal (or a set of journals). If the JNCI value exceeds one, then the assessed research entity is performing above average.
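The arithmetic has the same form as CNCI; only the baseline changes, from a category average to the average citations of same-type, same-year documents in the same journal. A brief sketch with a made-up journal baseline (not a real InCites value):

```python
# Hypothetical journal baseline: average citations to 2014 articles in the
# journal where the paper appeared.
journal_expected_citations = 5.8

jnci = 46 / journal_expected_citations
print(round(jnci, 2))  # 7.93 -> well above 1, i.e. above average for that journal
```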

Percentile in Subject Area:  The percentile of a publication is determined by building a citation frequency distribution for all publications of the same year, subject category, and document type (arranging the papers in descending order of citation count) and finding the percentage of papers cited more often than the paper of interest. If a paper has a Percentile in Subject Area of 1%, then 99% of the papers of the same subject category, year, and document type have a lower citation count: the smaller the percentile, the better the performance. The Percentile in Subject Area is available from the document list view for all papers in the InCites dataset.

Average Percentile:  For any set of papers, an Average Percentile can be calculated as the mean of the percentiles of all of the papers in the set. If a paper is assigned to more than one category, the category in which the percentile value is closest to zero is used, i.e., the best-performing value.
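The following Python sketch illustrates both metrics, assuming you have raw citation counts for the full cohort (same year, subject category, and document type) and, for Average Percentile, each paper's percentile in every category it is assigned to; all names and numbers are illustrative rather than actual InCites output.

```python
def percentile_in_subject_area(paper_citations, cohort_citations):
    """Percentage of papers in the cohort (same year, subject category,
    and document type) cited more often than the paper of interest."""
    more_cited = sum(1 for c in cohort_citations if c > paper_citations)
    return 100.0 * more_cited / len(cohort_citations)

def average_percentile(per_paper_percentiles):
    """Mean percentile for a set of papers.

    per_paper_percentiles: one inner list per paper, holding its percentile
    in each category the paper is assigned to.
    """
    # Use the best-performing category (percentile closest to zero) per
    # paper, then average across the set.
    best = [min(percentiles) for percentiles in per_paper_percentiles]
    return sum(best) / len(best)

# A paper cited 46 times in a 100-paper cohort where only one paper is
# cited more often sits at the 1st percentile.
cohort = [60] + [3] * 99
print(percentile_in_subject_area(46, cohort))    # 1.0
print(average_percentile([[1.0, 4.5], [12.0]]))  # 6.5
```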

Other InCites metrics help you identify what percentage of papers in a set are performing at a particular level of citation; see the sketch after the list below.

  • % Documents in Top 1% 
    • percentage of papers from a set that have been cited enough times to place them in the top 1% or better (when compared to papers in the same category, year, and of the same document type) 
  • % Documents in Top 10%
    • percentage of papers from a set that have been cited enough times to place them in the top 10% 
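A minimal sketch of how these shares follow from the per-paper percentiles (the five percentile values are made up):

```python
def share_in_top(percentiles, threshold):
    """Percentage of papers whose Percentile in Subject Area is at or
    inside the given threshold (e.g. 1 for Top 1%, 10 for Top 10%)."""
    in_top = sum(1 for p in percentiles if p <= threshold)
    return 100.0 * in_top / len(percentiles)

# Hypothetical percentiles for a five-paper set.
percentiles = [0.4, 7.2, 9.9, 35.0, 80.0]
print(share_in_top(percentiles, 1))   # 20.0 -> % Documents in Top 1%
print(share_in_top(percentiles, 10))  # 60.0 -> % Documents in Top 10%
```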

International Collaborations: This indicator shows the number of publications in a set with at least two different countries among the affiliations of the co-authors.

% International Collaborations: This indicator shows the percentage of publications in a set that have international co-authors.

% Industry Collaborations: Industry collaborations are papers with at least one corporate author affiliation. This indicator shows the percentage of publications in a set that have a corporate co-author. 
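As a rough sketch of how these three indicators can be derived from affiliation data (the field names and the three sample records are hypothetical, not the InCites data model):

```python
def collaboration_indicators(papers):
    """papers: list of dicts with 'countries' (set of countries drawn from
    the co-author affiliations) and 'has_corporate_affiliation' (bool)."""
    n = len(papers)
    international = sum(1 for p in papers if len(p["countries"]) >= 2)
    industry = sum(1 for p in papers if p["has_corporate_affiliation"])
    return {
        "International Collaborations": international,
        "% International Collaborations": 100.0 * international / n,
        "% Industry Collaborations": 100.0 * industry / n,
    }

# Hypothetical three-paper set.
papers = [
    {"countries": {"Canada", "Germany"}, "has_corporate_affiliation": False},
    {"countries": {"Canada"}, "has_corporate_affiliation": True},
    {"countries": {"Canada"}, "has_corporate_affiliation": False},
]
print(collaboration_indicators(papers))
```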

When looking at the entities that collaborate with your organization, a Collaborations visualization is available:

collaboration network visualization

Institutional Analysis Videos

Explore Organizations: Introduction

Explore Organizations: Part 2

Identifying Peer Institutions

Collaboration Videos

Evaluating Institutional Collaborations

Evaluating Regional Collaborations