Are University Rankings Fair? A Reflection on the “Ranking Game”

University rankings have become a dominant force in shaping perceptions of academic quality and in influencing the decisions of students, governments, and funding bodies. Global rankings, such as Times Higher Education (THE), QS World University Rankings, and the Academic Ranking of World Universities (ARWU), offer a comparative tool for evaluating universities across a range of criteria. However, growing concerns about the fairness and legitimacy of these ranking systems, particularly in the context of Malaysian higher education, raise important questions. Can we truly apply the same metrics to all institutions? More importantly, do these rankings measure real value and impact relevant to local needs, or are they increasingly manipulated to serve business interests, distorting the true purpose of education?

The Case of Malaysia: Public vs. Private Universities

Malaysia’s higher education system includes a mix of public and private universities, each serving different populations and missions. Public institutions like Universiti Malaya (UM), Universiti Kebangsaan Malaysia (UKM), and Universiti Putra Malaysia (UPM) consistently rank highest among Malaysian universities. In the 2025 QS World University Rankings, Universiti Malaya is placed 65th globally, the top position for a Malaysian university (QS, 2025). These institutions cater to diverse socio-economic groups, often focusing on nation-building, research, and community engagement, with missions closely tied to Malaysia’s development needs.

On the other hand, private universities like Taylor’s University and Monash University Malaysia rank lower on the global scale but perform well in areas like graduate employability and international student satisfaction. Private institutions generally serve a more affluent, often international, student population. Their focus is more on global competitiveness and market-driven education, aiming to meet the demands of the international job market.

This clear distinction in the missions and populations of Malaysia’s public and private universities illustrates the challenges in applying standardised global ranking criteria across all types of institutions. The public sector’s emphasis on local and national development is difficult to measure through metrics like internationalisation and citation counts, while private universities may excel in areas that align more closely with the global market economy.

The Problem with Standardised Ranking Metrics

Global rankings apply the same set of criteria across institutions, which often fails to account for the diverse roles universities play. Metrics like research output, international faculty and students, and citations per paper disproportionately favour larger, research-intensive institutions in developed countries. In the Malaysian context, this is problematic for public universities, which are often tasked with local development projects and nation-building goals. Many of these universities excel in health sciences, engineering, and agriculture, focusing on local issues such as public health, infrastructure development, and sustainable agriculture. However, their contributions may not be fully reflected in rankings that prioritise global research visibility.

Private universities, in contrast, tend to perform well in categories like internationalisation and employability, as these metrics align more closely with their business models. Private institutions in Malaysia frequently form partnerships with industries, focusing on niche programmes that appeal to both local and international students. For these universities, rankings become a tool for marketing and recruitment, serving as a measure of their commercial success rather than their broader educational impact.

Manipulation of Rankings and Ethical Concerns

Globally, several cases have demonstrated how universities can manipulate data to improve their rankings. For example, in 2018, Temple University’s Fox School of Business was found to have inflated data to improve its position in the U.S. News &amp; World Report rankings, including exaggerating student admission statistics and faculty-student ratios (Douglas-Gabriel, 2018). More recently, in 2022, Columbia University was accused of submitting inaccurate data related to class sizes and faculty qualifications, leading to a significant drop in its ranking (Korn, 2022).

Additionally, dissatisfaction with rankings is not limited to data manipulation scandals. Some institutions, such as Utrecht University, have opted out of global rankings altogether. In 2023, Utrecht withdrew from the Times Higher Education World University Rankings, citing concerns that the ranking system’s emphasis on quantitative metrics and competition did not align with its educational values (Science Business, 2023). Similarly, a UC Berkeley study suggested that the business practices of some ranking agencies, such as QS, may create conflicts of interest: universities that frequently used QS’s paid services, such as consultancy, experienced a noticeable improvement in their rankings, raising concerns about the integrity of such rankings (CSHE, 2021).

Commercialisation of Rankings

The commercialisation of university rankings has become a growing concern. Ranking organisations, such as QS and Times Higher Education, offer paid consultancy services to help universities improve their scores. Some institutions spend large sums of money on these services to boost their performance in areas like international collaboration, faculty diversity, or research visibility. This practice raises concerns about the objectivity of rankings and whether they reflect true educational quality or merely the financial resources of the institutions (Matthews, 2017).

In Malaysia, this is particularly relevant for private universities that use rankings as a marketing tool to attract international students. By improving their ranking performance, these institutions can justify higher tuition fees and appeal to a more global audience. However, this focus on ranking performance may come at the expense of local educational needs, raising the question of whether rankings are being used to inflate perceptions of quality rather than to reflect an institution’s true impact.

The Impact on Local Needs

One of the most significant issues with global university rankings is whether they measure the real impact of universities on their local communities. Public universities in Malaysia play a critical role in nation-building, producing graduates who contribute to vital sectors such as healthcare, engineering, and education. Their research often focuses on local issues, such as improving healthcare access in rural areas or developing sustainable agricultural practices. However, these contributions may be overlooked by global rankings that prioritise international visibility over local impact.

Private universities, while playing an important role in providing specialised, market-driven education, tend to focus more on the commercial aspects of higher education, which can lead to misalignment with local needs. As Malaysia continues to balance public service with market demands, the pressure to perform well in global rankings may distort institutional priorities, particularly when these rankings favour global recognition over regional contributions.

Conclusion

University rankings are a useful tool for evaluating institutions, but they must be used cautiously. The standardised approach to ranking public and private universities in Malaysia often fails to capture the full scope of their missions and societal roles. While public universities focus on local development and nation-building, private universities tend to pursue market-driven goals. Rankings that prioritise global visibility over local impact risk distorting the educational landscape, rewarding institutions that are skilled at navigating the ranking system rather than those that provide real value to their communities.

As educators and policymakers, it is essential to ensure that rankings do not become the sole measure of success. Instead, we must develop more inclusive metrics that reflect the true contributions universities make, both globally and locally, to ensure a fairer and more comprehensive understanding of educational quality.

References

Center for Studies in Higher Education (CSHE). (2021). Berkeley study: Major university rankings may be biased. UC Berkeley. https://cshe.berkeley.edu/news/berkeley-study-major-university-rankings-may-be-biased

Douglas-Gabriel, D. (2018, July 9). Temple University’s business school dean forced out amid scandal over fake U.S. News rankings data. The Washington Post. https://www.washingtonpost.com/education/2018/07/09/temple-universitys-business-school-dean-forced-out-amid-scandal-over-fake-us-news-rankings-data/

Korn, M. (2022, September 12). Columbia University drops to No. 18 in U.S. News rankings after cheating scandal. The Wall Street Journal. https://www.wsj.com/articles/columbia-university-drops-to-no-18-in-u-s-news-rankings-after-cheating-scandal-11662934553

Matthews, D. (2017, March 2). World university rankings are ‘open to manipulation’. Times Higher Education. https://www.timeshighereducation.com/news/world-university-rankings-are-open-manipulation

QS. (2025). QS World University Rankings 2025. https://www.topuniversities.com/university-rankings

Science Business. (2023, October 12). Utrecht University withdraws from global ranking as debate on quantitative metrics grows. https://sciencebusiness.net

Disclaimer: This document was created with the assistance of AI technology