QS MidWeek Brief - August 06, 2025

Welcome! It can be an eye-roll-inducing term, but there is a grain of truth in the criticism that universities can act like ivory towers. By their very academic nature, they are exclusionary; you must meet certain requirements just to enter. So, what happens if setting foot on campus requires people to bring nothing more than themselves?
This week, we look at a case study of what happens when a university’s gates are open to the public. We also bring you a follow-up to our piece on the zombification of university research.
Stay insightful,
Anton John Crace
Editor, QS
Creating a University Without Borders, Literally
By Karim Seghir, Chancellor, Ajman University, UAE and Member, QS Global Advisory Committee (Middle East)
The age-old metaphor of higher education as an ivory tower is being dismantled to pave the way for a future of inclusive and interconnected learning. So-called elite groups of thinkers, housed in academic silos on hilltops, are no longer regarded as virtuous or valuable. The true measure of an institution is now its ability to break down barriers between the campus and the community in pursuit of mutually beneficial outcomes.
Today, the road to knowledge is a two-way path characterised by Edupreneurship – an integrated landscape of educators, researchers, trailblazers, and changemakers, many of whom fall into more than one category. So why cling to segmented ideas, spaces and places?
To foster a genuine openness to the world beyond our walls, Ajman University is dismantling the physical barriers that hinder collaborative efforts with stakeholders. Removing these obstacles surrounding our perimeter sends a powerful message: Nothing should impede access for new generations of thinkers and new iterations of thinking.
The demolition of our exterior walls represents the start of a larger project with the Ajman Municipality & Planning Department, ultimately aimed at establishing a public pathway around our entire city block.
Once the project is complete, our neighbours will be able to join us for a jog, bike ride or stroll among the gardens and trees, enjoying an unobstructed view of our historic and beautiful campus. Our 360-degree perspective of the city will continuously remind us of what, and who, matters most.
Ultimately, community members will gain access to more space at AU, including the library, industry incubator, innovation hub, sports fields and more. Similarly, our students will spend more time outside the classroom, engaging in on-site, hands-on learning experiences. Every moment stakeholders share boosts opportunities for meaningful social impact, our raison d’être.
Tearing down walls at AU refers to all of them: the figurative barriers that obstruct multi-disciplinary research and impede society-centred innovation, as well as the literal ones that divide and separate us from the communities we exist to serve. We prioritise open access in all its forms, intertwining people and purpose in pursuit of a better future for all.
What does this philosophy look like in practice? Every day, in every way, we strive to bring the outside in and vice versa. A prime example is our mobile dental clinic, which offers free preventive and restorative dental treatments to at-risk populations. Equipped with state-of-the-art dental technology, including dental chairs and X-ray units, the clinic aims to ensure that everyone, regardless of their circumstances, can maintain good oral health.
On the Ajman shoreline, students frequently collaborate with community volunteers on AU’s mangrove replanting project, which is one aspect of our comprehensive commitment to sustainability. These remarkable trees protect coastlines from erosion, provide essential habitats for fish and other marine life and absorb significant amounts of carbon dioxide. So far, AU students and the Ajman community have planted 3,370 young mangrove trees together.
Ultimately, the most significant measure of Ajman University’s success will be our ability to foster social change and nurture the changemakers who will drive it forward. We firmly believe that one of the best ways to build the future is by tearing down walls.
Dr Seghir is the Chancellor of Ajman University in the United Arab Emirates and has extensive, geographically diverse professional experience. He is a member of the Advisory Board of Harvard Business Review (HBR) Arabia and has spoken at various regional and international events, including those of AACSB, EFMD, PRME, the Economic Research Forum and the World Bank. He is also a board member of I-GAUGE, the ratings system for Indian colleges and universities.
Zombie Hunters
By Claudia Civinini
In Brief
- Predatory publishers, paper mills, and the use of AI to generate fraudulent research papers are undermining the integrity of scientific literature.
- The proliferation of AI-generated and fraudulent papers is creating a credibility crisis, compounded by the difficulty of detecting AI-generated content as traditional tools become outdated.
- To combat these issues, researchers have developed new tools to detect unethical papers. Fostering a sleuthing mindset among students and improving collaboration among scientists, universities, and publishers are crucial.
Professor Graham Kendall, Vice-Chancellor at MILA University Malaysia, led a secret life for a while, anonymously managing the @fake_journals account on X.
The inspiration for creating an account shedding light on the predatory publishing landscape, Professor Kendall explains, came after Jeffrey Beall retired and his famed list of predatory publishers, known simply as Beall’s List, was shut down.
“Since Beall started his work in 2010, we haven’t really made any inroads into stopping those practices. If anything, it’s got worse,” he says.
One post in particular, about hyper-prolific authors, gained Professor Kendall’s account a surge of followers and pushed it past the 10,000 mark.
While Professor Kendall concedes that, in some research fields, publishing shorter papers or a series of papers is more common, over-the-top productivity, such as publishing more than one paper a day, should be looked into, he says.
“There are certainly people using paper mills and generative AI to produce papers.”
But that is only the tip of the iceberg. Generative AI, paper mills, poor peer review and predatory publishers are issues that, combined, will harm the integrity of the scientific archive, according to Professor Kendall.
“If it carries on too long, we just won’t be able to rely on the scientific archive anymore, or have any faith in it,” he says.
“People would end up using papers produced with generative AI, papers that are not really good science, that haven’t been peer reviewed, but have still made their way into the scientific literature.”
Unethical publishing is an old problem, but generative AI tools are lending unprecedented speed to fraudsters, magnifying the scale of the problem. This in turn is creating a zombie academic archive, a mix of AI-generated content, human fraud, and hard-to-kill mistakes and fake science potentially living on forever in AI models.
Scientists all over the world are responding to the threat and trying to hunt down unethical publishing in a bid to safeguard the integrity of the scientific literature.
Artificially More Intelligent
Dr Ophélie Fraisier-Vannier is a postdoctoral researcher at the Institut de Recherche en Informatique de Toulouse in France. Last year, she joined the team led by Dr Guillaume Cabanac, a professor of computer science whose activity as a ‘deception sleuth’ was recognised by Nature in 2021, to conduct further work on, among other topics, fraud detection.
Dr Cabanac is the creator of the Problematic Paper Screener (PPS), which screens research for signs that it has been produced unethically. These signs include tortured phrases – awkward expressions such as ‘counterfeit consciousness’ in place of ‘artificial intelligence’ – which signal a crude use of synonyms, usually a way to disguise plagiarism. The PPS also flags other problematic papers, such as those generated with SCIgen, those containing ChatGPT ‘fingerprints’, and citejacked papers – articles in legitimate journals that cite articles in hijacked journals.
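To give a flavour of the idea, the sketch below is a minimal illustration of phrase-based screening, not the PPS itself: it simply matches a manuscript against small lists of known tortured phrases and leftover LLM ‘fingerprint’ sentences. The phrase lists are examples only; a real screener relies on much larger, curated collections.

```python
# Illustrative sketch only -- not the Problematic Paper Screener.
# Flags text containing known "tortured phrases" (crude synonym swaps)
# or leftover LLM "fingerprint" sentences.

# Example entries only; real screeners use far larger curated lists.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "flag to clamor": "signal to noise",
}

LLM_FINGERPRINTS = [
    "as an ai language model",
    "i don't have access to real-time data",
    "regenerate response",
]


def screen_text(text: str) -> list[str]:
    """Return human-readable flags raised for this text."""
    flags = []
    lowered = text.lower()
    for phrase, likely_meaning in TORTURED_PHRASES.items():
        if phrase in lowered:
            flags.append(f"tortured phrase: '{phrase}' (likely '{likely_meaning}')")
    for fingerprint in LLM_FINGERPRINTS:
        if fingerprint in lowered:
            flags.append(f"possible LLM leftover: '{fingerprint}'")
    return flags


if __name__ == "__main__":
    sample = ("Our counterfeit consciousness model improves the flag to clamor "
              "ratio. As an AI language model, I cannot verify these results.")
    for flag in screen_text(sample):
        print(flag)
```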
“It’s a really fascinating topic because it’s at the heart of what it is to do research: if you cannot trust the scientific record, it’s a big problem. Not just for science, but for society at large,” Dr Fraisier-Vannier explains.
“We need more people flagging the problems so that the scientific record can remain trustworthy and society can rely on scientific research, trusting what comes out of labs, universities, and the like.”
Unfortunately, generative AI is also making fraud detection harder.
“There are several types of detection tools included in the PPS, but the main one revolves around the tortured phrases,” Dr Fraisier-Vannier explains.
“The problem is that ChatGPT is way too smart to generate tortured phrases. So, we know already that the tortured phrases detector is kind of outdated, because tortured phrases are not used anymore to generate articles.”
Generative AI, Dr Fraisier-Vannier explains, could just be a very efficient tool for paper mills to generate papers even more quickly. Before AI, the people behind paper mills had to copy and paste articles and avoid plagiarism detection by using synonyms liberally, which created tortured phrases. Now, that might not be needed anymore.
“Their workflow has been reduced by one step,” she says. “They just need to generate an entirely new paper.”
Thankfully, some human mistakes still make the undisclosed use of AI evident.
Papers containing leftovers from an LLM response are not uncommon. These include phrases such as “I don’t have access to real-time data” and, in some cases, text literally noting that the author is an AI language model.
Catching AI-generated papers that don’t contain those tell-tale sentences requires more time-consuming analyses.
One of these analyses, Dr Fraisier-Vannier explains, can be done on citations: references that have nothing to do with the subject matter, or ghost citations (made-up references), can flag an article as suspect.
“We will have to lean more on these kinds of flags rather than on keywords,” she says.
“Keywords will still catch some outliers, people who forgot to delete a sentence. But I fear we will miss the majority otherwise.”
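As a rough illustration of what a citation-based check could look like – a sketch, not the Toulouse team’s actual tooling – one simple signal is whether a cited DOI is registered at all. The snippet below queries the public Crossref REST API (not mentioned in the article, but a standard service for DOI metadata); references whose DOIs do not resolve are worth a closer look, though a missing record is a prompt for scrutiny rather than proof of fraud.

```python
# Illustrative sketch: check whether cited DOIs are registered with Crossref.
# A DOI that does not resolve may point to a ghost (made-up) citation.

import requests

CROSSREF_API = "https://api.crossref.org/works/"


def doi_is_registered(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    response = requests.get(CROSSREF_API + doi, timeout=10)
    return response.status_code == 200


def flag_ghost_citations(dois: list[str]) -> list[str]:
    """Return the DOIs in a reference list that Crossref cannot resolve."""
    return [doi for doi in dois if not doi_is_registered(doi)]


if __name__ == "__main__":
    # Example inputs only; in practice these would be DOIs extracted from
    # a paper's reference list.
    references = ["10.1000/example.doi", "10.9999/not.a.real.paper.2024"]
    for doi in flag_ghost_citations(references):
        print(f"possible ghost citation: {doi}")
```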
Eerily Quiet
Things have gone quiet online for publishers, explains Simon Linacre, Chief Commercial Officer at Cabells and author of The Predator Effect: Understanding the Past, Present and Future of Deceptive Journals.
Download statistics, a valuable metric for both publishers and authors even in the open-access model, have become less useful. Increasingly, the figures won’t reflect actual usage, because some of that usage will come via AI rather than through downloads.
If people use AI to do research, they may not read any of the original papers. With a lot of the open-access papers already hoovered up by AI models, there won’t be any hits recorded on a publisher’s website.
“A lot of publishers and libraries are identifying that the download metric is in decline. Now that’s going to cause a real problem, because the libraries then have less of a valuable metric to understand what the cost per download looks like. The publishers are worrying because their traffic is dropping,” Linacre says.
Automated attacks, or bad bot activity, are another threat facing both universities and publishers.
According to cybersecurity firm Imperva, part of the Thales Group, bad bots can damage the education sector by, for example, taking over student and faculty accounts and scraping proprietary research and data.
Tim Ayling, Cybersecurity Specialist at Thales, tells QS Insights Magazine that generative AI makes the development of simple bots easier, enabling even people without deep technical knowledge to launch bot attacks.
“Academic publishers own a vast amount of valuable copyrighted content, some of which can be decades old. Particularly prestigious journals, research papers and even the reputations of the academic authors and journals can hold great value, which naturally makes them a target for bad actors,” he says.
“Thanks to vulnerabilities with legacy identity and access management tools on the websites of publishers and universities, as well as limited abilities to monitor and block automated activity, this content can be at particular risk.”
Content can also be scraped in bulk from academic journals, he says, which can drive up operational costs for publishers through the surge in content requests and threaten their financial viability.
How To Survive a Zombie Outbreak
“What do we do when there’s a zombie outbreak? We build walls, we fence ourselves off in a community,” explains Dr Jake Renzella, Senior Lecturer at the School of Computer Science and Engineering at the University of New South Wales, Australia.
In the social media sphere, platforms such as Discord, which allow people to create smaller and more private communities, are on the rise.
“That might sound a bit negative, but I think what those smaller social media platforms are saying is that people want a more intimate connection with other people who they know are real.”
The zombie internet analogy, as covered in “The Zombie Scientific Archive”, fits the online undergrowth of unethical scientific publishing: a mix of AI-generated content, human fraud, and hard-to-kill fake science living on forever in AI models.
What the walls Dr Renzella describes could mean for research and publishing remains an open question.
Linacre notes that library publishing, with university libraries assuming greater control of research outputs, is a solution that has been proposed in response to previous challenges to the traditional research and publishing models.
“In the current climate, I can see that this argument can still hold weight, with universities taking advantage of low barriers of entry into publishing by mandating publications through their own platforms,” he explains.
In a 2024 article, Dr Jessamy Bagenal, Senior Executive Editor – Head of Clinical at The Lancet, proposed a series of solutions.
One part of the solution, she writes, would be to find new ways to finance the open access model, such as new types of group deals that shift the focus away from article processing charges, while reforming the academic rewards system to prioritise quality over quantity and weaken the link between publication and promotion. She also advocated for more robust editorial processes that scrutinise studies for signs of data fabrication.
Her paper, most importantly, is a call for editors and publishers to tackle the challenges created by generative AI. She quoted a book by AI entrepreneur Mustafa Suleyman and writer Michael Bhaskar, The Coming Wave, which warns that humanity is not ready for the impact of new technologies and introduces the concept of “pessimism aversion” – the reluctance to confront difficult change.
For journal editors and scientific publishers today, she warned, pessimism aversion is a dangerous trap to fall into.
“All the signs about generative AI in scientific publishing suggest things are not going to be ok,” she writes.
A teacher turned education journalist, Claudia has been writing about international education for the best part of 10 years. Originally from Italy, she worked and studied in Australia before moving to the UK in 2014. As a journalist, she worked for publications in the education sphere such as EL Gazette, The PIE News and Tes, specialising in research and data-led reporting, before going freelance in 2021. She holds an MSc in Educational Neuroscience and is passionate about education research.