The University Ranking by Academic Performance, abbreviated as URAP, was developed at the Informatics Institute of Middle East Technical University. Since 2010, it has published annual national and global rankings of the top 2,000 colleges and universities. URAP's scientometric measurements are based on data obtained from the Institute for Scientific Information via Web of Science and InCites. For global rankings, URAP employs indicators of research performance, including the number of articles, citations, total documents, article impact total, citation impact total, and international collaboration. In addition to global rankings, URAP publishes national rankings for universities in Turkey using additional indicators, such as the number of students and faculty members, obtained from the Center for Measuring, Selection and Placement (ÖSYM).
URAP gathers data from international bibliometric databases such as Web of Science and InCites, provided by the Institute for Scientific Information. It uses data from the 2,500 higher education institutions (HEIs) with the highest number of published articles. The overall score of each HEI is based on its performance on several indicators; of the 2,500 selected HEIs, the top 2,000 are included in the rankings published by URAP. Field-based rankings are produced for 23 fields based on the Australian ERA (Excellence in Research for Australia) classification.
URAP uses six main indicators to measure academic performance: the number of articles, citations, total documents, article impact total, citation impact total, and international collaboration. The raw bibliometric data underlying these indicators have highly skewed distributions; to address this, the medians of the indicators are used. A Delphi study was conducted with a group of experts to assign weights to the indicators, distributing a total score of 600 among them. URAP uses additional indicators, including the number of students and faculty members, for ranking universities in Turkey. The following table shows the indicators used for global rankings in URAP as of 2014.
| Indicator | Objective | Weight (out of 600) | Source |
|---|---|---|---|
| Number of Articles | Scientific Productivity | 21% | InCites |
| Citation | Research Impact | 21% | InCites |
| Total Documents | Scientific Productivity | 10% | Web of Science |
| Article Impact Total | Research Quality | 18% | InCites |
| Citation Impact Total | Research Quality | 15% | InCites |
| International Collaboration | International Acceptance | 15% | InCites |
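As an illustration of how the weights combine, the overall score can be expressed as a weighted sum of normalized indicator scores. The sketch below is hypothetical: the indicator names and the assumption that each indicator is normalized to [0, 1] are ours, not URAP's published methodology; only the weights follow the table above.

```python
# Hypothetical sketch of a weighted scoring scheme using URAP's weights.
# The per-indicator normalization to [0, 1] is an assumption for illustration.
WEIGHTS = {
    "articles": 126,                    # 21% of 600
    "citations": 126,                   # 21% of 600
    "total_documents": 60,              # 10% of 600
    "article_impact_total": 108,        # 18% of 600
    "citation_impact_total": 90,        # 15% of 600
    "international_collaboration": 90,  # 15% of 600
}

def overall_score(indicator_scores):
    """Weighted sum of normalized indicator scores; maximum is 600."""
    return sum(WEIGHTS[name] * indicator_scores.get(name, 0.0)
               for name in WEIGHTS)

# An institution scoring perfectly on every indicator reaches the full 600.
print(overall_score({name: 1.0 for name in WEIGHTS}))  # 600.0
```

Note that the weights sum to exactly 600, matching the total score distributed by the Delphi study.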
The number of articles is used as a measure of current scientific productivity and covers articles indexed by Web of Science, including articles, reviews, and notes. The weight of this indicator in the overall ranking is 21%.
Citation, as an indicator in the URAP ranking, is a measure of research impact, scored according to the total number of citations received. The weight of this indicator in the overall ranking is 21%.
Total documents is a measure of the sustainability and continuity of scientific productivity. The total document count covers all scholarly literature indexed by Web of Science, including conference papers, reviews, letters, discussions, scripts, and journal articles. The weight of this indicator in the overall ranking is 10%.
Article Impact Total (AIT) is a measure of scientific productivity adjusted by the ratio of the institution's Citations Per Publication (CPP) to the world CPP in 23 subject areas. This ratio indicates whether the institution is performing above or below the world average in a field. It is multiplied by the institution's number of publications in that field and then summed across the 23 fields, as shown in the following formula:
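A reconstruction of the formula implied by the description (the notation is ours: $P_i$ is the institution's publication count in field $i$, and $\mathrm{CPP}_i$ and $\mathrm{CPP}_i^{\text{world}}$ are the institution's and the world's citations per publication in that field):

$$\mathrm{AIT} = \sum_{i=1}^{23} \frac{\mathrm{CPP}_i}{\mathrm{CPP}_i^{\text{world}}} \times P_i$$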
The weight of this indicator in the overall ranking is 18%.
Citation Impact Total (CIT) is a measure of research impact corrected by the institution's CPP normalized with respect to the world CPP in 23 subject areas. This ratio indicates whether the institution is performing above or below the world average in a field. It is multiplied by the institution's number of citations in that field and then summed across the 23 fields, as shown in the following formula:
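A reconstruction of the formula implied by the description (the notation is ours: $C_i$ is the institution's citation count in field $i$, with $\mathrm{CPP}_i$ and $\mathrm{CPP}_i^{\text{world}}$ as in the AIT definition):

$$\mathrm{CIT} = \sum_{i=1}^{23} \frac{\mathrm{CPP}_i}{\mathrm{CPP}_i^{\text{world}}} \times C_i$$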
The weight of this indicator in the overall ranking is 15%.
International Collaboration is a measure of the global acceptance of an institution. The data, based on the total number of published studies conducted in collaboration with foreign universities, are obtained from InCites. The weight of this indicator in the overall ranking is 15%.
URAP covers considerably more institutions than other major ranking systems. A section about URAP in “Where Are the Global Rankings Leading Us? An Analysis of Recent Methodological Changes and New Developments”, published in the European Journal of Education, notes that “While it is less well-known than SRG, ARWU, THE, and QS, it is interesting because it published a list of 2000 universities, while the above rankings cover a maximum of 700 universities.” This is also mentioned in the “EUA report on Ranking for 2013” published by the European University Association, which states that URAP, along with the SCImago ranking system, “fill an important gap in the rankings market in that their indicators measure the performance of substantially more universities, up to 2000 in the case of URAP and over 3000 in SCImago, compared to only 400 in THE, 500 in SRC ARWU, NTU ranking and CWTS Leiden, and around 700 in QS.”
URAP is mentioned as one of the four ranking systems that solely measure academic performance; the other three are the Performance Ranking of Scientific Papers for World Universities, the CWTS Leiden Ranking, and the SCImago Institutions Rankings. URAP excludes teaching indicators, such as student quality and teaching performance, from its global rankings and covers only research-oriented indicators. In the “International Benchmarking in UK Higher Education” report of the Higher Education Statistics Agency, URAP is listed among the benchmarking resources for measuring academic performance. In the same report, URAP is categorized among the “whole university rankings” along with the Times Higher Education World University Rankings (THE), QS World University Rankings, Academic Ranking of World Universities (ARWU), CHE Excellence Rankings, RatER Global University Ranking of World Universities, Webometrics Ranking of World Universities, 2010 World University Ranking, SIR World Report, CWTS Leiden Ranking, U-Multirank, European Research Ranking, Performance Ranking of Scientific Papers for World Universities, Human Resources & Labor Review (HRLR), and Professional Classification of Higher Education Institutions.
URAP is mentioned and used in several studies based on, or referring to, global rankings. In the article “World University Ranking Systems: An Alternative Approach Using Partial Least Squares Path Modeling”, published in the Journal of Higher Education Policy and Management, URAP is incorporated in the suggested model as one of the nine major worldwide university ranking systems, along with ARWU, QS, Times, Webometrics, Taiwan, Leiden, SIR, and CWUR. The same article categorizes URAP among the ranking systems based solely on publication performance, together with the Performance Ranking of Scientific Papers for World Universities, the CWTS Leiden Ranking, and the SCImago Institutions Rankings.
The following is a list of some of the books, peer-reviewed articles, and conference proceedings that have covered URAP or incorporated it in their models or comparisons.
Annual URAP ranking results are used by a number of ranked universities to showcase their academic performance. The following is a short list of links to university pages that mention URAP results, either independently or in conjunction with other ranking results.
The indicators used in URAP are absolute, size-dependent values, which biases the ranking towards larger institutions. According to the “EUA report on Ranking for 2013” published by the European University Association, URAP disregards books, excludes studies in the arts and humanities, and under-represents the social sciences. Furthermore, because its bibliometric indicators are not field-normalized, URAP does not compensate for differences in publication cultures. The report further states that “The results of the indicator on citation numbers in particular, as well as those on publication counts, are thus skewed towards the natural sciences and especially medicine.” It also notes that URAP's exclusion of teaching indicators makes its focus solely research-oriented.
The “University Ranking Lists: A Directory” report, published by the Division for Analysis and Evaluation of the University of Gothenburg, points out a problem that may arise from including more than 500 institutions in a ranking system. It states: “It [URAP] lists 2000 universities, and the purpose is to provide a ranking that covers not only institutions in the Western elite group. This purpose contrasts starkly with other ranking producers’ decisions not to publish more than the 400-500 top positions of their lists, since they do not consider their methods reliable below that level. [URAP] do not comment this problem.”