DOI: https://doi.org/10.47989/ir31154705
Introduction. The study presents a thorough statistical analysis of the Science and Technology Index (SINTA), a journal ranking system established by the Indonesian Ministry of Higher Education. SINTA ranks Indonesian journals into six tiers (S1 to S6) based on scientific content and administrative compliance. The system differs from the widely accepted quartile-based ranking systems, which rank journals purely from a citation perspective. This study assesses whether SINTA reflects journal quality when contrasted with globally recognized citation-based indicators.
Method. The H5-index and Impact distributions of 10,478 Indonesian journals spanning ten subject areas were retrieved as of November 2024. Statistical analysis was conducted using the Kruskal-Wallis and Dunn's pairwise tests to detect and identify statistically distinguishable ranking clusters.
Analysis. The study analysed whether the six SINTA rankings (S1 – S6) correspond to six distinct clusters when mapped onto the H5-index and Impact distributions. All analysis was performed in SPSS.
Results. The analysis showed that there are at most four statistically distinguishable groups, based on the H5-index and Impact distributions, instead of six independent groups as indicated by the six SINTA rankings. A substantial portion of non-adjacent SINTA rankings exhibited indistinguishable citation distributions. In some subject areas, all six SINTA rankings showed statistically indistinguishable citation-based distributions, indicating that SINTA fails to reflect citation-based quality.
Conclusions. Findings indicate that SINTA is not a reliable measure of journal quality and suggest adopting a more citation-based framework aligned with global practice.
Assessing journal quality has always been a key aspect of academic publishing in the scientific community. Publishing in reputable journals serves two purposes. First, it allows researchers to disseminate findings, methods, and discoveries (Robinson, 2024; Sánchez-García et al., 2024), which is an important part of advancing knowledge in the field. Second, it is one of the key aspects of developing a researcher’s portfolio for career advancement (Aprile et al., 2021; Heffernan, 2021). To evaluate journal quality, the scientific community widely relies on citation metrics to measure the impact and visibility of a scientific publication within a respective field (Waltman, 2016). According to Walters (2017), there are nineteen standard citation metrics, drawn from various data sources, for ranking journals. Among these, the Journal Impact Factor (JIF) (Garfield, 1999, 2006), a metric published annually by Clarivate Analytics in the Journal Citation Reports, is one of the most established measures of journal performance. It draws data from the Web of Science collection to calculate the number of citations per citable document over a two-year period. CiteScore (CiteScore, n.d.; Teixeira da Silva, 2020), a metric developed by Elsevier, is another established measure of journal performance based on the Scopus database. CiteScore divides the total number of citations received in a given year by documents published in the preceding four years by the total number of documents published in that same four-year period. Similarly, the SCImago Journal Rank (SJR) (González-Pereira et al., 2010) by SCImago Lab, also based on the Scopus database, ranks journals by computing weighted citations over a three-year period. Another journal performance measure is the Source Normalized Impact per Paper (SNIP) (Moed, 2010), a journal metric that normalizes citation counts according to disciplinary citation practices. By rescaling citation counts with respect to the citation density in a respective field, SNIP provides a balanced measure of a journal’s impact across diverse disciplines. Beyond this, the h-index (Hirsch, 2005) and its variations (Alonso et al., 2009) provide measures that integrate productivity with impact.
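To make the two ratio definitions above concrete, they can be written schematically as follows (a paraphrase of the descriptions in this paragraph, not the publishers’ official formulations):

$$\mathrm{JIF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{P_{y-1} + P_{y-2}},$$

where $C_{y}(t)$ is the number of citations received in year $y$ by citable items published in year $t$, and $P_{t}$ is the number of citable items published in year $t$. CiteScore follows the same ratio logic over the four-year window described above, dividing the citations counted for documents in that window by the number of documents published in the same window.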
Within a specific field, journals are typically ranked based on calculated scores, for example those computed by JIF, CiteScore, or SJR, and are categorized into the quartile ranking system. The top 25% in the group, clustered into Quartile 1 (Q1), represent the most prestigious publications with the highest citation metric scores; those that fall into the lowest 25% belong to Quartile 4 (Q4), with the least prestige and the lowest score distribution in the citation metric (Asan & Aslan, 2020). The quartile system is considered robust because it measures a long-term statistical trend rather than short-term variations. Thus, the Q1 – Q4 categorization provides a consistent measure of journal prestige and impact over time.
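As a minimal illustration of how the quartile assignment works (hypothetical journal names and scores; pandas’ qcut is one convenient way to split a score distribution into four equal-sized bins):

```python
import pandas as pd

# Hypothetical citation scores (e.g., JIF values) for twelve journals in one field.
journals = pd.DataFrame({
    "journal": [f"Journal {i}" for i in range(1, 13)],
    "score": [9.1, 7.4, 6.8, 5.2, 4.9, 4.0, 3.3, 2.7, 2.1, 1.5, 0.9, 0.4],
})

# Split the score distribution into four equal-sized bins; the top 25% become Q1.
journals["quartile"] = pd.qcut(journals["score"], q=4,
                               labels=["Q4", "Q3", "Q2", "Q1"])

print(journals.sort_values("score", ascending=False))
```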
While widely adopted, traditional citation-based indicators have limitations. For example, summary citation metrics may mislead research evaluation due to oversimplified interpretations of complex bibliometric data (Szomszor et al., 2020). Likewise, different disciplines may exhibit unique citation practices, which can lead to significant variability in the metrics (Galbraith et al., 2023). Moreover, the publish-or-perish culture in academia encourages publishers to publish more articles, creating the risk of inflating journal metrics and potentially sacrificing quality for quantity (Hanson et al., 2024). These challenges underscore that while metrics are indispensable tools, their interpretation must be nuanced and context-sensitive.
These global critiques of traditional bibliometrics highlight the need for more context-sensitive evaluation systems. In this context, the Indonesian Ministry of Research, Technology, and Higher Education launched the Science and Technology Index (SINTA) as a national platform for measuring research performance in 2017 (Antara News, 2017; Badan Strategi Kebijakan Dalam Negeri, 2017). SINTA adopted a different approach to evaluating journal quality (Lukman et al., 2018), diverging from the widely used quartile ranking system based on citation metrics. SINTA was introduced not only as a policy instrument to stimulate both the quality and quantity of publication in Indonesia, which at the time of launching still lagged significantly behind other ASEAN countries such as Singapore, Malaysia, and Thailand (Fiala, 2022; Fry et al., 2023), but also as a response to the broader global challenges faced by traditional citation-based metrics. In response to this gap, SINTA was strategically positioned as a national recognition and evaluation framework that expanded opportunities to publish in accredited Indonesian journals, giving researchers a more accessible route to publication compared with international outlets. By consolidating accreditation and citation metrics, the initiative was expected to boost domestic publication numbers while simultaneously serving the Ministry’s broader roadmap as a policy instrument for career advancement, institutional benchmarking, and national research visibility. The broader historical and policy contexts surrounding the Indonesian research sector, which led to the introduction of SINTA, have been discussed in detail by Fry et al. (2023), and we invite interested readers to consult their work for a comprehensive review.
Empirical analysis (Fry et al., 2023) suggests that SINTA, as a policy design, has achieved its immediate goal of increasing the number of publications from Indonesian researchers. Data show a 25% increase in total publications since SINTA’s introduction. This increase was dominated by sharp growth in conference proceedings and low-impact journals (approximately 62 – 86%), while there was only a modest increase (approximately 3%) in top-tier journals with high impact. The analysis also observed that team size per paper grew by around 13% in both domestic and international co-authorship. These patterns indicate that SINTA succeeded in boosting the number of publications, although the growth came primarily from low-impact journals, which raises concerns about whether SINTA rankings truly reflect quality over administrative compliance.
To better understand SINTA, it is important to examine how it assesses journal quality. Journals are evaluated based on eight criteria, which are clustered into administrative (criteria 1, 2, 3, 6, 7, and 8) and scientific (criteria 4 and 5) elements (Akreditasi Jurnal Ilmiah, 2018; Lukman et al., 2018; Pedoman Akreditasi Jurnal Ilmiah, 2021). Table 1A shows that, out of a maximum of 100 points, forty-eight points are allocated to administrative compliance and fifty-two points to the scientific elements. The SINTA ranking system groups the indexed journals into six tiers, SINTA 1 (S1) to SINTA 6 (S6), with S1 being the highest, based on their respective scores (Table 1B). While a higher score (and ranking) is expected to mean (perceived) better journal quality, the scoring system dictates that a journal must balance both administrative and scientific aspects to rank well. For example, a journal that excels in the scientific domain but lacks administrative compliance may only rank at S4. Similarly, a journal that meets and exceeds all the administrative criteria but is weak in scientific content may fall within S4 – S5.
According to the regulations set by the Ministry, a journal that accumulates a minimum total of seventy points, of which at least twenty-six points come from the scientific elements, is entitled to an S2 ranking (Pedoman Akreditasi Jurnal Ilmiah, 2021). The twenty-six-point threshold, which represents only half of the maximum possible score in the scientific elements, raises concerns that journals with mediocre scientific strength may be granted top-tier status in the SINTA ranking system. On a positive note, the scoring system incorporates a good practice by introducing a penalty for plagiarism: a maximum of twenty points is deducted from the total score if the journal fails to screen out plagiarized articles.
This work aims to evaluate to what extent the SINTA scoring system accurately captures the quality of journals in Indonesia. SINTA’s hybrid scoring system, which combines administrative compliance with scientific strength at nearly equal weights, may potentially distort the assessment of journal quality. This study asks whether the S1 – S6 rankings truly reflect journal performance, particularly when directly compared with widely accepted citation-based metrics.
A.

| No | Criteria | Max score |
|---|---|---|
| 1 | Journal naming | 2 |
| 2 | Governance of the journal publisher | 4 |
| 3 | Journal editing and management | 19 |
| | a. External reviewers (6) | |
| | b. Quality of the reviews (3) | |
| | c. Academic reputation of the external reviewers (5) | |
| | d. Author guidelines (1) | |
| | e. Editing format and style (2) | |
| | f. Editing and submission platform (2) | |
| 4 | Article content | 41 |
| | a. Coverage (4) | |
| | b. Collaboration distribution (8) | |
| | c. Originality (6) | |
| | d. Contribution to the advancement of science (1) | |
| | e. Impact as measured by citation (8) | |
| | f. Primary source citation (3) | |
| | g. Current citation (3) | |
| | h. Analysis and synthesis (5) | |
| | i. Conclusion (3) | |
| 5 | Writing style | 11 |
| | a. Clarity of the title of the articles (1) | |
| | b. Inclusion of authors' name and affiliation (1) | |
| | c. Abstract (2) | |
| | d. Keyword (1) | |
| | e. Systematic writing (1) | |
| | f. Supporting instrument in articles (graphs, tables) (1) | |
| | g. Citation and bibliography (2) | |
| | h. Language (2) | |
| 6 | Journal appearance | 7 |
| | a. Font size (1) | |
| | b. Layout (1) | |
| | c. Typography (1) | |
| | d. Document resolution (1) | |
| | e. Number of pages per volume (2) | |
| | f. Design (1) | |
| 7 | Sustainability | 4 |
| | a. Publication schedule (1) | |
| | b. Issues formatting (1) | |
| | c. Page number (1) | |
| | d. Volume indexing (1) | |
| 8 | Dissemination | 12 |
| | a. Number of visits to journal website (3) | |
| | b. Journal indexing (8) | |
| | c. Unique identifier for articles (1) | |
| 9 | Penalty (plagiarism) | −20 |

B.

| Ranking | Score |
|---|---|
| SINTA 1 (S1) | 85 – 100 |
| SINTA 2 (S2) | 70 – 84 |
| SINTA 3 (S3) | 60 – 69 |
| SINTA 4 (S4) | 50 – 59 |
| SINTA 5 (S5) | 40 – 49 |
| SINTA 6 (S6) | 30 – 39 |
Table 1. (A) The criteria that are used to evaluate the quality of a journal. (B) The scoring system that is used to categorize journals into six tiers.
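A minimal sketch of how the thresholds in Table 1B, together with the seventy-point/twenty-six-point rule for S2 and the plagiarism penalty discussed above, could translate an evaluation into a SINTA tier. The function name and inputs are illustrative, not part of any official tool, and the assumption that a journal failing the scientific floor drops to the next tier is ours:

```python
def sinta_tier(admin_score: float, scientific_score: float,
               plagiarism_penalty: float = 0.0) -> str:
    """Map evaluation scores to a SINTA tier using the Table 1B ranges.

    admin_score        -- criteria 1, 2, 3, 6, 7, 8 (max 48 points)
    scientific_score   -- criteria 4 and 5          (max 52 points)
    plagiarism_penalty -- up to 20 points deducted for unscreened plagiarism
    """
    total = admin_score + scientific_score - plagiarism_penalty

    if total >= 85:
        return "S1"
    # S2 additionally requires at least 26 points from the scientific elements
    # (the accreditation rule discussed above); for S1 the floor is implied
    # arithmetically, since the administrative criteria top out at 48 points.
    if total >= 70 and scientific_score >= 26:
        return "S2"
    if total >= 60:
        return "S3"
    if total >= 50:
        return "S4"
    if total >= 40:
        return "S5"
    if total >= 30:
        return "S6"
    return "not accredited"


# Example from the text: strong scientific content but weak administrative compliance.
print(sinta_tier(admin_score=8, scientific_score=50))   # -> "S4"
```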
It is worth mentioning that SINTA is not the only national-level bibliometric system in the world. Several countries have established similar systems as key instruments to evaluate research output, each with its own methodologies, strengths, and limitations. While the structures obviously vary among countries, these systems share common ground in enhancing the visibility of country-level research, ensuring the quality of research outputs, and structuring researchers’ career paths. Russia adopted the Russian Index of Science Citation (RISC) (Moskaleva et al., 2018) to provide quantitative indicators that allow scientific publications to be ranked and compared in order to increase global visibility. While RISC itself is not a formal tiered journal system, it is widely accepted as a journal ranking in the Russian academic system due to its citation-based indicators. The second example is China, where the Chinese Academy of Sciences (CAS) has developed a separate journal evaluation system that deviates from the conventional global quartile system (Jin & Wang, 1999). Its pyramidal tiered distribution was designed to ensure that the ranking truly reflects the quality of the outputs and aligns with the Chinese government’s research priorities. For example, in this policy-driven system, only 5% of journals are assigned to the highest tier, creating a sharp hierarchy in the journal ranking. The third case is Brazil, with its Qualis system (Jaffé, 2020), which categorizes academic journals into eight tiers using a set of criteria unique to the country. Originally designed for assessing graduate programs in Brazil, Qualis has now become the dominant framework for assigning journal rankings in the country. These three national-level examples highlight that SINTA is not an isolated case, but part of a broader international trend in how countries attempt to measure and incentivize research quality.
SINTA can also be situated within the broader international discussion on research evaluation framed by the Leiden Manifesto (Hicks et al., 2015) and the San Francisco Declaration on Research Assessment (Cagan, 2013). Both frameworks place a strong emphasis on the responsible use of metrics, evaluation transparency, and the need to supplement citation indicators with more comprehensive qualitative evaluations. SINTA reflects these global discussions by aiming to offer a transparent and standardized evaluation tool at the national level, but it differs in a significant way by weighing administrative compliance and scientific strength nearly equally in its scoring. Because of this hybrid design, SINTA offers a valuable case for investigating whether such a system can offer a valid and reliable indicator of research quality. To this end, the present study applies a statistical methodology to test whether SINTA rankings correspond to meaningful quality differences, while offering a framework to assess other national-level bibliometric systems.
We collected data on journal metrics (Impact, H5-index, Citations 5 year, Citations, and SINTA accreditation ranking) of published Indonesian journals from https://web.archive.org/web/20250826134820/https://sinta.kemdiktisaintek.go.id/journals. The citation-based indicators (Impact, H5-index, Citations 5 year, and Citations) are the only four metrics available on the journal profiles that can be used for the statistical analysis. Of the four, Impact and H5-index are the most suitable for analysing journal quality because they are not purely extensive parameters and are therefore less sensitive to distortion from outliers, such as isolated citation spikes. Impact normalizes total citations over the number of published articles, which allows meaningful comparison among journals of different sizes and ages. The H5-index captures a balance between productivity and influence over a five-year window: it does increase with size, but it also requires citations per article to scale properly (Hirsch, 2005). As of November 2024, there were 10,478 journals indexed on the platform, clustered into ten subject areas: religion, economy, humanities, health, science, education, agriculture, art, social, and engineering. A total of 7,090 journals had not been assigned to any specific subject area; the remaining 3,388 journals were categorized into one or multiple subject areas. At the time of data collection, there was no official explanation as to why a large proportion of journals had not been categorized into at least one specific subject area.
Figure 1A shows the distribution of journals across all subject areas: education, science, and social have the greatest numbers of journals, while agriculture, art, and religion have the fewest. We point out that a given journal may have multiple subject areas assigned to it. Out of the 3,388 unique journals, around 58% have one specific subject area, 25% have two, 12% have three, and 5% have four. We find no journal assigned to more than four subject areas.
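These counts can be reproduced from a local export of the SINTA listing along the following lines (a sketch; the file name and column names are assumptions, since the platform provides no official bulk-download format):

```python
import pandas as pd

# Assumed local export of the SINTA journal listing (column names are illustrative):
# journal_id, rank (S1-S6), h5_index, impact, subject_areas ("science;education", ...)
df = pd.read_csv("sinta_journals_2024-11.csv")

df["n_subjects"] = (df["subject_areas"].fillna("")
                    .apply(lambda s: len([x for x in s.split(";") if x])))

categorized = df[df["n_subjects"] > 0]
print(f"{len(df)} journals in total, {len(categorized)} with at least one subject area")

# Share of categorized journals carrying one, two, three, or four subject areas.
print(categorized["n_subjects"].value_counts(normalize=True).sort_index())
```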
We plot the distribution of journal rankings across all subject areas in Figure 1B. There are several key observations from the graph. First, the highest-ranked journals (S1) comprise less than 10% of the journal population in each subject area, while the lowest-ranked journals (S6) make up less than 5%. Second, the percentage of S2 journals is consistently higher than that of S5 journals across all subject areas, while the percentage of S3 journals is consistently lower than that of S4 journals. Third, the ranking distribution exhibits a double-peak feature, centred around S2 and S4, except for education, which peaks around S3, and engineering, which has a broad peak spanning S2, S3, and S4. The statistics show a clear deviation from a normal distribution centred around the median rankings S3 – S4, which may indicate inconsistencies in the SINTA ranking system.
Figure 1. (A) The number of journals which are clustered into one or multiple subject areas: Religion, economy, humanities, health, science, education, agriculture, art, social, and engineering. The numbers presented here were extracted from SINTA as of November 2024. (B) The distribution of journals by ranking S1 to S6, with S1 being the highest, shows the ranking proportion across all subject areas.
Figure 2 presents the box plots of the H5-index and Impact distributions, two citation-based indicators extracted from SINTA (https://web.archive.org/web/20250826134820/https://sinta.kemdiktisaintek.go.id/journals), for rankings S1 to S6 across all subject areas. The plots show the statistical spread of the H5-index and Impact distributions for each ranking, which serves as the starting point for assessing how well the rankings align with established citation-based indicators in differentiating journal quality. Let us discuss the key features of the box plots as follows:
Median
H5-index distributions. The median generally decreases as one moves from S1 (highest) to S6 (lowest) in all subject areas. This agrees with the expectation that higher-ranked journals, say S1, should have a higher median H5-index than lower-ranked journals, say S3. However, while the trend aligns with the expectation, the decrease in the medians is not significant. Additionally, we do observe deviations from the general trend; for example, in the subject area religion, the median of S6 is higher than those of S4 and S5.
Impact distributions. The downward trend of the median from S1 to S6 is much less obvious than in the H5-index distributions. The medians of S1 to S6 appear to fluctuate randomly with no clear trend. For example, in the subject area economy, one cannot say with confidence how the median for each ranking differs from the others. This pattern indicates that the SINTA ranking system struggles to show a clear separation in the medians of the Impact distributions among the rankings.
Interquartile range (IQR)
H5-index distributions. We generally observed large IQRs across all SINTA rankings for all subject areas. Some exceptions are apparent in S6 for the subject areas education, humanities, and science, where the IQRs are rather narrow. The large IQRs indicate large variation in the H5-index scores within the same group. The second observation is that the IQRs, which comprise 50% of the data points within a given ranking, exhibit large overlaps between consecutive rankings. The strong overlaps in the IQRs raise concerns about whether SINTA rankings can effectively differentiate journal quality.
Impact distributions. The large IQRs and the IQR overlaps between consecutive rankings are much more pronounced when we analyse the Impact score distributions. This suggests that there is no clear statistical separation in the Impact scores between SINTA rankings, whether consecutive or non-adjacent.
Whiskers and outliers
H5-index distributions. In general, long and asymmetrical whiskers are observed in each ranking, an indication that the H5-index scores have high variability and are highly skewed within each group. One pattern in the data is worth mentioning: S5 religion, S5 education, S5 and S6 economy, S6 agriculture, S4 and S5 humanities, S5 social, S4 and S5 science, and S5 engineering have zero H5-index scores appearing in the whisker plots. This suggests that some journals in these rankings receive no citations over the measured period. We also observe outliers, such as journals with a lower SINTA ranking having higher H5-index scores than those with a higher SINTA ranking. For example, in the subject area social, there are S3 journals with H5-index scores higher than those of S1 and S2 journals.
Impact distributions. The long and asymmetrical whiskers, zero citations, and prominent outliers are also present, and indeed amplified compared with the H5-index distributions, in the Impact score distributions across SINTA rankings. The majority of the plots show only upper whisker bounds, indicating highly skewed Impact distributions. The absence of citations across SINTA rankings, apparent from the lower quartile being zero, for example in the subject areas education, agriculture, and art, is alarming: it means that a large proportion of journals in these rankings receives no citations at all. Large portions of journals also exhibit high Impact scores although they are ranked lower in the SINTA system.
The box plots hint that the statistics of the H5-index and Impact scores may not fully align with the six SINTA ranking classifications. The analysis of median values, IQRs, and whiskers and outliers suggests that journals may be systematically misclassified within the SINTA ranking framework. However, the box plot analysis alone cannot resolve this issue. To obtain a comprehensive understanding of the situation, we follow statistical approaches to see how well the SINTA ranking differentiates journal quality.
[Figure 2 panels: (A) H5-index and (B) Impact box plots for the subject areas religion, economy, humanities, health, science, education, agriculture, art, social, and engineering.]
Figure 2. The box plots for (A) H5-index and (B) Impact distributions for each SINTA ranking across all subject areas.
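Box plots like those in Figure 2 can be redrawn from the same journal-level data. The sketch below, which reuses the assumed CSV export and column names from the earlier snippet, draws one panel (H5-index for economy) using seaborn:

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv("sinta_journals_2024-11.csv")          # assumed export, as above
econ = df[df["subject_areas"].fillna("").str.contains("economy")]

# One box per SINTA ranking, ordered from S1 (highest) to S6 (lowest).
sns.boxplot(data=econ, x="rank", y="h5_index",
            order=["S1", "S2", "S3", "S4", "S5", "S6"])
plt.title("H5 index - economy")
plt.ylabel("H5-index")
plt.show()
```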
The citation metrics H5-index and Impact measure the extent to which a journal has influence in the academic community. A journal with a high H5-index or Impact is expected to contain a substantial number of articles that are widely cited by researchers in the field. While the H5-index and Impact measure journal performance from different perspectives, a strong positive correlation between the two metrics is expected. We looked for evidence of correlation between the H5-index and Impact distributions by calculating the Spearman rank correlation rs for each SINTA ranking across all subject areas; no assumption was made about the underlying distributions of the H5-index and Impact. Table 2 presents Spearman’s rank correlation between the H5-index and Impact for each SINTA ranking across all subject areas. While there are various guidelines for interpreting correlation strength (Akoglu, 2018; Schober et al., 2018), it is generally accepted that |rs| ≤ 0.4 denotes a weak correlation, 0.4 < |rs| ≤ 0.7 a moderate correlation, and 0.7 < |rs| ≤ 1 a strong correlation between variables. We represent the strength of correlation, at significance level α = 0.05, with colour coding in Table 2: blue to white shades describe weak correlations, while red shades indicate moderate to strong correlations; the stronger the correlation, the darker the red. The colour coding in Table 2 allows quick identification of the correlation strength between the H5-index and Impact within each SINTA ranking across all subject areas.
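The per-ranking correlations in Table 2 can be recomputed along the following lines (a sketch using SciPy; the published analysis itself was run in SPSS, and the column names follow the assumptions used earlier):

```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("sinta_journals_2024-11.csv")          # assumed export, as above
econ = df[df["subject_areas"].fillna("").str.contains("economy")]

# Spearman rank correlation between H5-index and Impact for each SINTA ranking.
for rank, grp in econ.groupby("rank"):
    rs, p = spearmanr(grp["h5_index"], grp["impact"])
    print(f"{rank}: r_s = {rs:.3f}, p = {p:.3f}, n = {len(grp)}")
```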
We find that the H5-index and Impact are positively correlated with varying strength. However, we observe two exceptions. First, engineering and health S6 have rs ≈ 0, indicating no correlation between the two variables. Second, science S6 exhibits a weak negative correlation, which implies that the H5-index tends to increase as Impact decreases and vice versa. A preliminary observation on S1 across all subjects (first column in Table 2) reveals that six subject areas (religion, humanities, science, art, social, engineering) exhibit moderate positive correlations (0.4 < rs ≤ 0.7), while the remaining subject areas have rs ≤ 0.4, indicating weak correlations between the variables. We believe that the H5-index distributions statistically align with those of Impact for S1 across all subjects.
In contrast, the correlation map for S2 – S5 across all subjects shows that weak correlations (rs ≤ 0.4) between the H5-index and Impact dominate. The subject areas humanities (0.525 in S5) and art (0.622 in S5) are the only two cases that exhibit a relatively strong positive correlation between the H5-index and Impact. There are a few noticeable moderate correlations (red shades with 0.4 ≤ rs ≤ 0.5), such as economy (0.479 in S2) and agriculture (0.471 in S3), but the majority are weak correlations with rs ≤ 0.4. This trend implies that the H5-index has a restricted association with Impact in most subject areas, which suggests that other factors affect the Impact score distribution in these SINTA ranks.
The last ranking, S6 (last column), displays a wide spectrum of correlation strengths, ranging from weak to strong, with instances of no correlation or even a weak negative correlation. The subject areas economy (0.726) and art (0.915) have strong correlations, indicating a solid positive relationship between the H5-index and Impact. At the other extreme, science (−0.102) exhibits a weak negative correlation, suggesting an inverse relationship between the H5-index and Impact: an upturn in the H5-index is accompanied by a downturn in Impact. The H5-index and Impact appear independent in health (0.018) and engineering (−0.009), where changes in the H5-index do not correspond to changes in Impact.

Table 2. The Spearman rank correlations between the H5-index and Impact for each ranking (S1 – S6) across all subject areas. The map is colour-coded for easy viewing: the largest value is marked in dark red and the lowest in blue, with values in between represented by a gradual transition from red to blue.
The large variation in correlation strength indicated in Table 2 hints that the H5-index and Impact scores do not align with SINTA rankings as predictors of Indonesian journal quality. To better understand the robustness of the SINTA ranking system, and thus journal quality, we performed the Kruskal-Wallis (KW) test to determine whether the H5-index and Impact distributions are statistically different across the S1 – S6 rankings for each subject area. We argue that if the SINTA ranking is robust, the H5-index and Impact distributions should show clear statistical differences among the ranking groups. Signs of disagreement would suggest that the SINTA ranking system omits important metrics from the assessment rubric or incorporates metrics that are irrelevant to journal quality.
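A sketch of the KW step using SciPy (the published analysis used SPSS; column names follow the assumptions above):

```python
import pandas as pd
from scipy.stats import kruskal

df = pd.read_csv("sinta_journals_2024-11.csv")          # assumed export, as above

def kw_by_ranking(subject_df: pd.DataFrame, metric: str):
    """Kruskal-Wallis H test of `metric` across whichever SINTA rankings are present."""
    groups = [grp[metric].dropna().values for _, grp in subject_df.groupby("rank")]
    return kruskal(*groups)                              # (H statistic, p-value)

econ = df[df["subject_areas"].fillna("").str.contains("economy")]
h, p = kw_by_ranking(econ, "h5_index")
print(f"chi-square = {h:.2f}, p = {p:.4g}")
```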
Table 3 reports the calculated χ2 for the H5-index and Impact distributions and the associated p-values at the α = 0.05 significance level for all subject areas. The number of journals N for each subject area is also displayed in the table. For the H5-index, the results indicate that, in every subject area, there is at least one ranking whose distribution is statistically different from the other rankings. Impact, on the other hand, shows a different overall behaviour: the distributions of all six rankings (S1 – S6) are indistinguishable in four out of ten subject areas, namely economy, humanities, agriculture, and art. In the remaining six subject areas, there is at least one ranking whose Impact distribution is statistically different from the others.
| Subject area | N | df | χ2 (H5-index) | p (H5-index) | χ2 (Impact) | p (Impact) |
|---|---|---|---|---|---|---|
| Religion | 223 | 5 | 73.96 | < 0.001 | 21.96 | 0.001 |
| Economy | 481 | 5 | 108.1 | < 0.001 | 8.19 | 0.146 |
| Humanities | 418 | 5 | 103.3 | < 0.001 | 7.54 | 0.186 |
| Health | 449 | 5 | 87.61 | < 0.001 | 11.15 | 0.048 |
| Science | 959 | 5 | 152.6 | < 0.001 | 27.14 | < 0.001 |
| Education | 1,354 | 5 | 327.5 | < 0.001 | 24.49 | < 0.001 |
| Agriculture | 322 | 5 | 79.79 | < 0.001 | 7.82 | 0.166 |
| Art | 272 | 5 | 51.35 | < 0.001 | 10.38 | 0.065 |
| Social | 917 | 5 | 209.7 | < 0.001 | 28.84 | < 0.001 |
| Engineering | 453 | 5 | 91.72 | < 0.001 | 21.99 | 0.001 |
Table 3. The Kruskal-Wallis (KW) test indicates that at least one SINTA ranking has a statistically different H5-index (Impact) distribution from the rest of the groups in each subject area at the α = 0.05 significance level. Exceptions appear in the subject areas economy, humanities, agriculture, and art, where the KW test concludes that the Impact distributions are indistinguishable across all groups S1 – S6 (p > 0.05). Here N gives the number of journals per subject area and df (= 5) is the degrees of freedom in the KW test. The calculated χ2 for each subject area is also listed in the table.
The KW test concludes that among the S1 – S6 rankings there is at least one ranking group whose H5-index or Impact distribution is statistically different from the rest. Exceptions appear in the subject areas economy, humanities, agriculture, and art, where the Impact distributions of the S1 – S6 rankings are statistically indistinguishable. It is important to point out that the KW test is a global analysis of the distributions across ranking groups; it does not determine which rankings differ or how many ranking groups are distinct in a subject area. To address this, we performed Dunn’s pairwise test to identify specific ranking-group differences in each subject area. Since there are six rankings S1 – S6 in a subject area, there are $\binom{6}{2} = 15$ possible SINTA ranking pairs [S1 – S2, S1 – S3, S1 – S4, S1 – S5, S1 – S6, S2 – S3, S2 – S4, S2 – S5, S2 – S6, S3 – S4, S3 – S5, S3 – S6, S4 – S5, S4 – S6, S5 – S6] for which to determine whether the H5-index and Impact distributions differ significantly.
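The fifteen pairwise comparisons can be reproduced, for example, with the scikit-posthocs package (a sketch, not the SPSS procedure actually used; the choice of Bonferroni p-value adjustment here is an assumption, since the correction method is not stated in the text):

```python
import pandas as pd
import scikit_posthocs as sp

df = pd.read_csv("sinta_journals_2024-11.csv")          # assumed export, as above
econ = df[df["subject_areas"].fillna("").str.contains("economy")]

# Dunn's post-hoc test over all SINTA ranking pairs (Bonferroni correction assumed).
pvals = sp.posthoc_dunn(econ, val_col="h5_index", group_col="rank",
                        p_adjust="bonferroni")

print(pvals.round(3))        # 6 x 6 matrix of adjusted p-values
print(pvals < 0.05)          # True entries correspond to the blue boxes in Figure 3A
```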
Figure 3A reports Dunn’s pairwise test for the H5-index distributions across all fifteen possible SINTA ranking pairs for each subject area. Blue boxes visually represent the pairs whose H5-index distributions are statistically distinct at the α = 0.05 significance level, while white boxes denote the pairs with no statistical difference in the H5-index distributions at the same significance level. Ideally, the six SINTA rankings S1 – S6 would correspond to six distinct statistical groups based on the H5-index distributions; however, our findings indicate a different outcome. First, in each subject area there are pairs with indistinguishable H5-index distributions, which suggests there are fewer than six independent statistical groups. Each subject area also shows a unique structure in terms of which pairs, and how many pairs, are statistically different. For example, comparing the subject areas religion and economy, we clearly see variation in which ranking pairs show a statistical difference in the H5-index distributions. The large variability in these patterns highlights the complexity of the SINTA scoring system in ranking the journals of a given subject area. Second, Figure 3A shows that two or more rankings can share similar H5-index distributions, suggesting that these rankings belong to the same statistical group; moreover, the rankings sharing similar distributions can be non-adjacent. Third, a particular SINTA ranking can simultaneously belong to different statistically distinguishable groups. These findings suggest a systematic mismatch between the SINTA rankings S1 – S6 and bibliometric indicators, in this case the H5-index, which challenges the validity of the SINTA metrics.
Let us illustrate these key findings, particularly the second and third points, using the subject area economy in Figure 3A. We observe that S1 shares statistically similar H5-index distributions with S2, S3, and S4, but is statistically different from S5 and S6. In contrast, the H5-index distribution of S2 differs from those of S3, S4, S5, and S6. Thus, S1 and S2 belong to the same statistical group, while S1, S3, and S4 form another. While S1 and S3 belong to the same group, we note that they are not adjacent rankings. We also see that S1 belongs to two distinct statistical groups: one clustered with S2, and the other with S3 and S4.
S3 shares a similar H5-index distribution with S4 but differs from S5 and S6. While S4 cannot be statistically distinguished from S5, it is distinct from S6. Lastly, the rankings S5 and S6 share statistically indistinguishable H5-index distributions. As a result, S4 and S5 form one statistical group and S5 and S6 another. The ranking S5 provides another example of a ranking shared by two distinct statistical groups, one clustered with S4 and another with S6.
Figure 3B displays Dunn’s pairwise test for the Impact distributions across all fifteen possible SINTA ranking pairs for each subject area. As before, blue (white) boxes visually represent the pairs whose Impact distributions are statistically different (similar) at the α = 0.05 significance level. The pairwise analysis of the Impact distributions shares similar outcomes with those detailed in the previous paragraphs for the H5-index. First, there are ranking pairs with indistinguishable Impact distributions, suggesting fewer than six independent statistical groups. Second, multiple rankings often share similar distributions, including non-adjacent ones, indicating that they belong to the same statistical group. Third, a single ranking can simultaneously belong to different statistically distinct groups. There is one additional finding in the Impact distributions: the subject areas economy, humanities, agriculture, and art have indistinguishable Impact distributions across all rankings S1 – S6, which implies that there is only one independent statistical group in these subject areas. In other words, the rankings S1 – S6 are not valid in these four subject areas (economy, humanities, agriculture, and art).
Figure 3A and Figure 3B present the grouping analysis of the SINTA rankings (S1 – S6) using Dunn’s pairwise test on the H5-index and Impact distributions. A standard and compact way to summarize the grouping is the letter display representation, as shown in Figure 3C. A letter represents an independent group. SINTA rankings that share the same letters are statistically similar, belonging to the same statistical group. Those labelled with different letters, say a and b, have statistically different distributions, implying that they are in distinct statistical groups.
Let us walk through the labelling process of assigning letters to statistically similar and different groups. We take the case of the H5-index distributions in the subject area economy (Figure 3A), discussed in detail above:
Statistical grouping recap:
S1 is statistically similar to S2, S3, and S4, but different from S5 and S6;
S2 is distinct from all the other rankings except S1;
S3 belongs to a different statistical group from S5 and S6;
S4 is the same as S5, but S4 is different from S6;
S5 is the same as S6.

Label assignment:
Labels for S1 – S4: S1 and S2 belong to the same statistical group, which suggests they share a letter. Since S2 is different from all the other rankings except S1, it must carry a letter not shared by S3 – S6. We assign the dual letter ab to S1 and a to S2. The rankings S3 and S4 are in the same statistical group as S1 but not S2, so S3 and S4 are assigned b;
Labels for S4 and S5: they are statistically similar, so they must share a common letter. This letter cannot be b, since b has already been assigned to S3 and S3 is distinct from S5. We therefore assign the dual letter bc to S4, reflecting its similarity to S3 (shared b), and c to S5 to show its similarity to S4;
Labels for S5 and S6: they are in the same statistical group, so they must also share a common letter. This letter cannot be c or any other previously used letter (a or b), because S6 is distinct from every ranking except S5. A new letter d is therefore assigned to S6, and S5 takes the dual letter cd to incorporate its similarity with both S4 and S6.
Summary
S1 is assigned ab, S2 a, S3 b, S4 bc, S5 cd, S6 d;
The letter display satisfies the grouping conditions given by Dunn’s pairwise test.
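The manual assignment above can also be automated: each letter corresponds to a maximal clique of the graph whose edges connect rankings that are not statistically different. A sketch using networkx, seeded with the economy/H5-index outcome just described (the pair list is read off Figure 3A as recounted above; the helper code is ours, not part of SINTA or SPSS):

```python
import string
import networkx as nx

rankings = ["S1", "S2", "S3", "S4", "S5", "S6"]

# Pairs whose H5-index distributions were NOT statistically different
# (subject area economy, as read off Figure 3A in the walkthrough above).
not_different = [("S1", "S2"), ("S1", "S3"), ("S1", "S4"),
                 ("S3", "S4"), ("S4", "S5"), ("S5", "S6")]

g = nx.Graph()
g.add_nodes_from(rankings)
g.add_edges_from(not_different)

# Each maximal clique of the "no significant difference" graph receives one letter.
cliques = sorted(nx.find_cliques(g),
                 key=lambda c: sorted(rankings.index(r) for r in c))
letters = {r: "" for r in rankings}
for letter, clique in zip(string.ascii_lowercase, cliques):
    for r in clique:
        letters[r] += letter

for r in rankings:
    print(r, letters[r])     # expected: S1 ab, S2 a, S3 b, S4 bc, S5 cd, S6 d
```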
We performed a thorough analysis of Dunn’s pairwise tests (Figure 3A and Figure 3B) for each subject area and converted the results into the letter display representation. Figure 3C shows the complete letter display representation for all subject areas under study.
Figure 3. (A) Dunn’s pairwise test identifies SINTA ranking pairs whose H5-index distributions are statistically different (blue boxes) at the α = 0.05 significance level. (B) Dunn's pairwise test on the Impact distributions (excluding the subject areas economy, humanities, agriculture, and art, which fail the KW test) highlights significant statistical differences in the pairs marked with blue boxes. (C) Based on Dunn’s pairwise analysis in (A) and (B), letter displays summarize the grouping, where identical lowercase letters represent SINTA rankings that belong to the same statistical group.
While the letter display captures the complete picture of the SINTA ranking grouping, a colour-code diagram, shown in Figure 4, offers a better visualization of the grouping topology. A colour represents a group of two or more SINTA rankings with indistinguishable H5-index or Impact distributions. Let us revisit the letter display result for the H5-index distribution in the subject area economy, as shown in Figure 4, to demonstrate how it works. We write each ranking with its associated group letters in parentheses for easy viewing: S1 (ab), S2 (a), S3 (b), S4 (bc), S5 (cd), S6 (d). The colour-code diagram gives the grouping as follows:
S1 and S2 share the same colour (green) which reflects the statistical similarity between these two;
S1, S3, and S4 share the colour pink, indicating that they belong to the same statistical group. Note that green and pink overlap at S1, reminiscent of the dual-letter representation;
S4 and S5 share the colour blue, highlighting their statistical similarity. The overlap of pink and blue at S4 is a signature of the dual-letter representation;
S5 and S6 share the colour yellow, which emphasizes that S5 and S6 belong to the same statistical group. At S5, blue and yellow overlap, corresponding to the dual letter in the letter display.
Figure 4 shows the complete colour-code diagram of the grouping of the SINTA rankings based on the H5-index and Impact distribution metrics. A careful examination of Figure 4 reveals important findings. First, we discover, through the colours green, pink, blue, and yellow, that there are at most four statistically distinct groups of rankings, instead of the six independent rankings (S1 – S6) suggested by the SINTA metrics. This finding suggests that the SINTA rankings, when analysed through the H5-index and Impact distributions, do not fully differentiate among all six rankings. Table 4 summarizes the number of distinct statistical groups for each subject area based on Dunn’s pairwise test. For the H5-index, four out of ten subject areas exhibit four distinct statistical groups, five subject areas show three groups, and one subject area has two groups. The situation becomes more concerning if we assess the SINTA rankings based on the Impact distributions: only one subject area achieves four distinct statistical groups, four subject areas form three groups, one subject area has two groups, and four subject areas collapse into a single indistinguishable group. These results highlight potential inconsistencies in the SINTA ranking system when compared with widely accepted bibliometric indicators such as the H5-index and Impact distributions, and further raise questions about its validity in accurately assessing journal performance across subject areas.
| Distinct statistical groups | H5-index | Impact |
|---|---|---|
| 4 | Economy, Science, Art, Social | Social |
| 3 | Religion, Humanities, Health, Education, Engineering | Religion, Health, Science, Education |
| 2 | Agriculture | Engineering |
| 1 | – | Economy, Humanities, Agriculture, Art |
Table 4. The summary for the number of distinct statistical groups associated with the subject areas under study based on Dunn’s pairwise test.
The second finding that is easily visualized through the colour-coded representation (Figure 4) is the complex grouping structure of the SINTA rankings. We identify three statistical group patterns, whether in the H5-index or the Impact distribution space. The first pattern is a simple grouping of two or more adjacent SINTA rankings. For example, the subject area education (H5-index distribution) has three distinct statistical groups formed by (S1, S2, S3), S4, and (S5, S6); rankings in parentheses denote those that belong to the same statistical group. The second pattern involves overlaps, when a given SINTA ranking belongs to two or more adjacent, statistically distinguishable groups. This pattern is evident in, for example, the subject area social (Impact distribution), where there are four distinct statistical groups formed by S1, (S2, S3, S4), (S3, S4, S5), and (S5, S6). Clearly, S3 and S4 belong to two distinct statistical groups (red and blue), while S5 belongs to two distinct groups denoted by blue and yellow. It is worth pointing out that these overlaps occur only between adjacent SINTA rankings. The third pattern involves overlaps of non-adjacent rankings, creating a complex group structure. Consider the subject area health for the Impact distribution: three distinct statistical groups emerge, (S1, S3), (S1, S4, S6), and (S2, S3, S4, S6). Here S1 belongs to two distinct statistical groups, green and blue, and is grouped with S3, a non-adjacent ranking, as well as with S4 and S6, also non-adjacent rankings. Similarly, S3 belongs to two different statistical groups, green and red, while S4 and S6 belong to red and blue. These intricate patterns strongly suggest the potential unreliability of the SINTA metric system in providing an accurate assessment of journal performance.
[Figure 4 panels: colour-code diagrams for the H5-index (left) and Impact (right), one row per subject area: religion, economy, humanities, health, science, education, agriculture, art, social, and engineering.]
Figure 4. Letter display in Figure 3C is translated into a colour-code diagram to visualize the grouping of SINTA rankings per subject area. SINTA rankings with the same colour belong to the same statistical group. Analysis on H5-index and Impact distributions indicates that there are at most four distinct SINTA rankings to characterize the Indonesian journals. The colour-coded diagram highlights non-adjacent grouping and overlaps among rankings.
It is widely accepted that the academic strength of a journal is traditionally measured by citation-based indicators, such as the average number of citations received by papers in that journal over a specified period. These indicators provide a robust basis for journal ranking because they rely on statistical aggregation over time, which smooths out short-term fluctuations and biases and captures long-term impact. For a given subject area, journals are ranked into quartiles (Q1 to Q4) based on their citation scores, with Q1 being the top 25% of journals that receive the highest number of citations.
Within the higher education landscape in Indonesia, the Ministry of Education and Culture followed a different approach to assessing journal quality. The Ministry designed a new ranking system, SINTA, which does not rely solely on citation-based indicators but also includes administrative measures. For a given subject area, SINTA categorizes journals into six tiers, from S1 (the highest) to S6 (the lowest). If SINTA rankings accurately portrayed journal quality, each SINTA ranking should display distinct H5-index and Impact distributions; in other words, one would expect six clearly distinguishable groups, one for each SINTA ranking, when examining these citation-based indicators.
Our analysis reveals that the SINTA rankings S1 – S6 do not translate into six distinguishable groups formed by the H5-index and Impact distributions, an indication of striking inconsistencies in SINTA’s key metrics. As illustrated in the colour-coded diagram in Figure 4, we found that there are at most four distinct groups (four colours) formed by the H5-index and Impact distributions across all subject areas. The situation is even more extreme in the Impact distribution space. For example, in the subject area economy, all journals ranked from S1 (highest) to S6 (lowest) exhibit statistically indistinguishable Impact distributions by the Kruskal-Wallis test, effectively forming one group. This indicates that the hierarchy of quality implied by S1 – S6 loses its meaning altogether when examined through the Impact distributions. This analysis calls into question the ability of the SINTA ranking system to reliably reflect journal quality.
The formation of non-adjacent groupings of SINTA rankings, as depicted in the colour-coded diagram in Figure 4, further complicates the picture. For example, Dunn’s pairwise test on the H5-index distributions in the subject area economy indicates that S1 journals have an H5-index distribution similar to that of S3 journals. How does one reconcile this contradiction: two groups have different rankings, and thus supposedly distinct quality levels, yet share a similar citation-based indicator such as the H5-index distribution? This seemingly systemic contradiction calls for a re-evaluation of the validity of SINTA’s key metrics.
The SINTA ranking system aims to provide an assessment framework for measuring the performance of Indonesian journals and for categorizing them into six levels S1 – S6, with S1 being the highest. We found that the SINTA ranking system fails to differentiate six distinguishable groups based on the H5-index and Impact distributions across all subject areas. We found considerable overlap among the rankings, which raises fundamental questions about SINTA’s framework in general. The current methodology may overlook key metrics or incorporate irrelevant ones, resulting in rankings that lack precision and clarity, which can undermine the reliability and validity of journal assessment in Indonesia. Future journal assessments in Indonesia should adopt a more evidence-based framework aligned with global standards to ensure reliability and validity.
We point out that while the dataset analysed in this work is specific to the Indonesian journal ranking system, the statistical framework employed transcends the local context. The non-parametric tests, Kruskal–Wallis followed by Dunn’s post-hoc test, examine whether SINTA rankings correspond to empirically distinguishable citation-based quality groups, thus evaluating the validity of SINTA. It should be noted that Impact and the H5-index may be influenced by language bias, where journals published in English may benefit from higher citation counts than those published in local languages (Di Bitetti & Ferreras, 2017). Nevertheless, in this study they are used not to benchmark Indonesian journals against global standards, but to test the internal consistency between SINTA rankings and globally accepted citation-based metrics. The statistical framework is not limited to SINTA; it can equally be applied to other national metric systems, including the Russian Index of Science Citation (RISC), the Chinese Academy of Sciences (CAS) journal ranking, and Brazil’s Qualis classification. Thus, the study contributes a robust methodology that goes beyond the Indonesian context and applies more broadly for international scholars and policy makers assessing and validating other bibliometric systems.
Eddy Yusuf is an Assistant Professor in School of Information and Technology, Universitas Ciputra Surabaya. He received his Ph.D. in Physics from Florida State University and his research interests include strongly correlated electron systems, physics education, and bibliometric analysis. He can be contacted at yusuf.eddy@fulbrightmail.org
Akoglu, H. (2018). User's guide to correlation coefficients. Turkish Journal of Emergency Medicine, 18, 91–93. https://doi.org/10.1016/j.tjem.2018.08.001
Akreditasi Jurnal Ilmiah. (2018). Permen Ristekdikti no 9 Tahun 2018. https://peraturan.bpk.go.id/Details/140402/permen-ristekdikti-no-9-tahun-2018 (Archived at https://web.archive.org/web/20250909083959/https://peraturan.bpk.go.id/Details/140402/permen-ristekdikti-no-9-tahun-2018)
Alonso, S., Cabrerizo, F. J., Herrera-Viedma, E., & Herrera, F. (2009). h-Index: A review focused in its variants, computation and standardization for different scientific fields. Journal of Informetrics, 3(4), 273–289. https://doi.org/10.1016/j.joi.2009.04.001
Antara News. (2017). Menristek luncurkan SINTA, portal kinerja peneliti. https://www.antaranews.com/berita/609670/menristek-luncurkan-sinta-portal-kinerja-peneliti (Archived at https://web.archive.org/web/20250614060223/https://www.antaranews.com/berita/609670/menristek-luncurkan-sinta-portal-kinerja-peneliti)
Aprile, K. T., Ellem, P., & Lole, L. (2021). Publish, perish, or pursue? Early career academics’ perspectives on demands for research productivity in regional universities. Higher Education Research & Development, 40(6), 1131–1145. https://doi.org/10.1080/07294360.2020.1804334
Asan, A., & Aslan, A. (2020). Quartile scores of scientific journals: Meaning, importance and usage. Acta Medica Alanya, 4(1), 102–108. https://doi.org/10.30565/medalanya.653661
Badan Strategi Kebijakan Dalam Negeri. (2017). Kemenristek Luncurkan Aplikasi SINTA 2.0. https://bskdn.kemendagri.go.id/website/kemenristek-luncurkan-aplikasi-sinta-2-0/ (Archived at https://web.archive.org/web/20250910235240/https://bskdn.kemendagri.go.id/website/kemenristek-luncurkan-aplikasi-sinta-2-0/)
Cagan, R. (2013). The San Francisco declaration on research assessment. DMM Disease Models and Mechanisms, 6(4), 869–870. https://doi.org/10.1242/dmm.012955
CiteScore. (n.d.). Elsevier. Retrieved February 9, 2025, from https://www.elsevier.com/products/scopus/metrics/citescore (Archived at https://web.archive.org/web/20250828045418/https://www.elsevier.com/products/scopus/metrics/citescore)
Di Bitetti, M. S., & Ferreras, J. A. (2017). Publish (in English) or perish: The effect on citation rate of using languages other than English in scientific publications. Ambio, 46(1), 121–127. https://doi.org/10.1007/s13280-016-0820-7
Fiala, D. (2022). Indonesia’s place in the research landscape of Southeast Asia. In Unisia (pp. 45–66). https://doi.org/10.20885/unisia.vol40.iss1.art3
Fry, C. V., Lynham, J., & Tran, S. (2023). Ranking researchers: Evidence from Indonesia. Research Policy, 52(5), 104753. https://doi.org/10.1016/j.respol.2023.104753
Galbraith, Q., Carlie Butterfield, A., & Cardon, C. (2023). Judging journals: How Impact Factor and other metrics differ across disciplines. College & Research Libraries, 84(6). https://doi.org/10.5860/crl.84.6.888
Garfield, E. (1999). Journal impact factor: A brief review. CMAJ. Canadian Medical Association Journal, 161(8), 979–980. https://pmc.ncbi.nlm.nih.gov/articles/instance/1230709/pdf/cmaj_161_8_979.pdf (Archived at https://web.archive.org/web/20250204171302/https://pmc.ncbi.nlm.nih.gov/articles/instance/1230709/pdf/cmaj_161_8_979.pdf)
Garfield, E. (2006). The history and meaning of the journal Impact Factor. JAMA, 295(1), 90–93. https://doi.org/10.1001/jama.295.1.90
González-Pereira, B., Guerrero-Bote, V. P., & Moya-Anegón, F. (2010). A new approach to the metric of journals scientific prestige: The SJR indicator. Journal of Informetrics, 4(3), 379–391. https://doi.org/10.1016/j.joi.2010.03.002
Hanson, M. A., Barreiro, P. G., Crosetto, P., & Brockington, D. (2024). The strain on scientific publishing. Quantitative Science Studies, 5(4), 823–843. https://doi.org/10.1162/qss_a_00327
Heffernan, T. (2021). Academic networks and career trajectory: ‘There’s no career in academia without networks.’ Higher Education Research & Development, 40(5), 981–994. https://doi.org/10.1080/07294360.2020.1799948
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431. https://doi.org/10.1038/520429a
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572. https://doi.org/10.1073/pnas.0507655102
Jaffé, R. (2020). Qualis: The journal ranking system undermining the impact of Brazilian science. Anais Da Academia Brasileira de Ciencias, 92(3), 1–13. https://doi.org/10.1590/0001-3765202020201116
Jin, B., & Wang, S. (1999). Division of SCI journal grading regions and distributions of Chinese papers. Scientific Research Management, 02, 2–8. https://doi.org/10.19571/j.cnki.1000-2995.1999.02.001
Lukman, L., Dimyati, M., Rianto, Y., Subroto, I. M. I., Sutikno, T., Hidayat, D. S., Nadhiroh, I. M., Stiawan, D., Haviana, S. F. C., Heryanto, A., & Yuliansyah, H. (2018). Proposal of the S-score for measuring the performance of researchers, institutions, and journals in Indonesia. Science Editing, 5(2), 135–141. https://doi.org/10.6087/KCSE.138
Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277. https://doi.org/10.1016/j.joi.2010.01.002
Moskaleva, O., Pislyakov, V., Sterligov, I., Akoev, M., & Shabanova, S. (2018). Russian Index of Science Citation: Overview and review. Scientometrics, 116(1), 449–462. https://doi.org/10.1007/s11192-018-2758-y
Pedoman Akreditasi Jurnal Ilmiah. (2021). Surat Keputusan Dirjen Dikti No 134/E/KPT/2021. https://arjuna.kemdikbud.go.id/ (Archived at https://web.archive.org/web/20250909104116/https://arjuna.kemdikbud.go.id/)
Robinson, P. G. (2024). How to get your work published. Community Dental Health, 41(3), 154–157. https://www.cdhjournal.org/issues/41-3-september-2024/1373-editorial-how-to-get-your-work-published (Archived at https://web.archive.org/web/20250417224207/https://www.cdhjournal.org/issues/41-3-september-2024/1373-editorial-how-to-get-your-work-published)
Sánchez-García, E., Martínez-Falcó, J., Seva-Larrosa, P., & Marco-Lajara, B. (2024). Delving into the analysis of scientific production and communication in academic literature. Journal of Librarianship and Information Science, 57(2), 433-449. https://doi.org/10.1177/09610006231223168
Schober, P., Boer, C., & Schwarte, L. A. (2018). Correlation coefficients: Appropriate use and interpretation. Anesthesia & Analgesia, 126(5), 1763–1768. https://doi.org/10.1213/ANE.0000000000002864
Szomszor, M., Adams, J., Fry, R., Gebert, C., Pendlebury, D. A., Potter, R. W. K., & Rogers, G. (2020). Interpreting bibliometric data. Frontiers in Research Metrics and Analytics, 5, 1–20. https://doi.org/10.3389/frma.2020.628703
Teixeira da Silva, J. A. (2020). CiteScore: Advances, evolution, applications, and limitations. Publishing Research Quarterly, 36(3), 459–468. https://doi.org/10.1007/s12109-020-09736-y
Walters, W. H. (2017). Citation-based journal rankings: Key questions, metrics, and data sources. IEEE Access, 5, 22036–22053. https://doi.org/10.1109/ACCESS.2017.2761400
Waltman, L. (2016). A review of the literature on citation impact indicators. Journal of Informetrics, 10(2), 365–391. https://doi.org/10.1016/j.joi.2016.02.007
Authors contributing to Information Research agree to publish their articles under a Creative Commons CC BY-NC 4.0 license, which gives third parties the right to copy and redistribute the material in any medium or format. It also gives third parties the right to remix, transform and build upon the material for any purpose, except commercial, on the condition that clear acknowledgment is given to the author(s) of the work, that a link to the license is provided and that it is made clear if changes have been made to the work. This must be done in a reasonable manner, and must not imply that the licensor endorses the use of the work by third parties. The author(s) retain copyright to the work. You can also read more at: https://publicera.kb.se/ir/openaccess