Is AI forcing universities to rethink both skills and responsibility?

QS Midweek Brief - March 26, 2026. Is AI a revolution, evolution or something else for skills? And how can we use AI in an environmentally mindful way?


Welcome! Dig into the current skills and employability debates and you will find two seemingly distinct positions: AI will require completely new skills, or AI will heighten the need for skills that already exist. We think the answer is both.

In the latest edition of QS Insights, our cover story explores how AI and other market trends are reordering the hierarchy of skills, preserving a core set while adding new demands. Sticking with AI, we also include a teaser for a larger piece on the environmental trade-offs of using it.

Stay insightful,
Anton John Crace
Editor in Chief, QS Insights Magazine
QS Quacquarelli Symonds


The new skills order

By Seb Murray

In brief

  • Generative AI is upending traditional degrees, shifting the value of education from simple knowledge recall to uniquely human skills like critical thinking and ethical judgement.
  • As machines automate routine tasks, demand for “oversight” capabilities is soaring. Students are increasingly pairing traditional degrees with micro-credentials to prove practical, AI-literate workforce readiness.
  • Universities must fundamentally redesign assessments and curricula. Success lies in training staff and students to verify, improve and responsibly use machine-generated insights.

Generative artificial intelligence has unsettled a long-standing assumption in higher education: that the ability to recall and apply knowledge is one of the main ways graduates prove they are ready for work.

Systems that can summarise, draft text and create code in seconds have forced universities to question assessment tasks that machines can now complete too. In 2023, Christian Terwiesch, a professor at the Wharton School of the University of Pennsylvania, found that ChatGPT would have earned a B grade on his MBA operations management exam; the chatbot was strong on structure and analysis, weaker on maths.

For Rose Luckin, Emeritus Professor of Learner Centred Design at University College London, the implication is clear. AI is “increasingly capable of performing the kinds of knowledge recall and application tasks that have traditionally been at the heart of university assessment”.

The premium, she argues, is shifting towards “the elements of human intelligence that AI cannot replicate”: understanding how you think and learn (metacognition), collaborative problem solving and making ethical judgements.

That view is echoed at the policy level. Andreas Schleicher, Director for Education and Skills at the OECD, notes that while many routine and predictable cognitive tasks can now be undertaken using AI, demand for “higher-order cognitive abilities – such as critical thinking – will be sustained”.

Yet employers are already reporting skills gaps. According to the 2026 Talent Shortage Survey by recruitment firm ManpowerGroup, AI-related skills are now among the hardest to find: not specialist research expertise, but basic literacy and confidence in using the tools.

As this year’s QS World University Rankings by Subject are published, universities face a sharp question: if AI can reproduce knowledge, what exactly should a degree now prove?

If AI is putting assessment under pressure, it is also reshaping the hierarchy of skills.

The more significant shift is not what AI can produce, but what humans must now contribute. As Professor Luckin argues, these systems can generate plausible answers, but they do not understand what they produce. That distinction matters for students, who must develop “a genuine understanding of what AI actually is and how it works” — not simply how to prompt a tool, but how to recognise its limits and predictable failures.

“This is not about learning to use particular tools, which will come and go,” Professor Luckin argues. “Large language models predict plausible text rather than retrieve verified facts.” 

In practice, this shifts the emphasis towards skills that sit above subject knowledge. Professor Luckin highlights the importance of students being able to judge what they know and do not know, to regulate their own thinking and to justify decisions. The question is no longer whether a student can produce an answer, but whether they can assess its quality — especially when a machine has helped to generate it.

Data from online learning platforms suggests this shift is already under way. Marni Baker Stein, Chief Content Officer at Coursera in California, says enrolments in generative AI courses reached 15 per minute in 2025, up from eight per minute in 2024. But alongside the surge in AI courses, enrolments have also risen sharply in courses focused on skills like debugging: spotting and fixing errors in code and systems.

Among people working in data-related roles and learning through their employers, enrolments in critical thinking courses rose by 168 percent year-on-year, while courses in data quality and data cleansing more than doubled. The pattern suggests a growing focus on checking and correcting what AI systems produce, rather than simply generating more content.

For Schleicher at the Paris-based OECD, this pattern is somewhat predictable. As routine tasks are handed to machines, the human role becomes one of oversight: deciding what to trust, what to challenge and what to change. Schleicher tells QS Insights: “It is crucial that students develop their capacity for independent thinking and analysis and do not become blindly reliant on potentially inaccurate content generated by AI.”

For all the warnings about jobs being lost to automation, employers are not reporting a glut of talent. They are reporting the opposite in some labour markets. 

According to ManpowerGroup’s survey, 73 percent of UK employers report difficulty finding the skilled workers they need. The strain is particularly acute in the automotive sector, where 92 percent of companies report shortages.

Engineering remains the hardest skill set to source, followed by manufacturing and production. Public services, health and social care are also struggling to recruit enough skilled staff, despite sustained investment in training.

Notably, AI-related skills are now among the toughest to find, cited by 19 percent of companies. As Michael Stull, Managing Director of ManpowerGroup UK, puts it: “Machines and AI can help to amplify productivity and effectiveness, but only so far. People are the vital component in unlocking the full potential of AI.”

Employers are responding not by replacing staff, but by investing in them. Upskilling and reskilling existing employees is now the most common response to shortages, cited by 33 percent of organisations in the ManpowerGroup report. By contrast, only 10 percent point to automation as a primary solution, and 9 percent to outsourcing – moving the work outside the company. All this points to a simple fact for universities and students: AI may change how work is done, but it has not removed the need for people able to do it well.

While employers report shortages, students are moving quickly to build new skills.

The growth in demand for AI-related learning has been rapid, but the shift is not limited to AI tools alone. Coursera’s data shows that learners continue to study the basic digital skills needed to keep existing systems running – such as SQL, JSON and web applications – even as they add new AI skills on top. The enrolment data suggests that learners understand the need to work with AI rather than simply rely on it.

Students are also changing how they think about qualifications, an important finding for universities. Micro-credentials are gaining ground. Coursera’s data suggests that 94 percent of students want micro-credentials to count towards a degree, up from 55 percent in 2023. Globally, 77 percent of learners say they are more likely to enrol in a degree programme that offers them.

Employers appear receptive as well. According to the same data, 85 percent are more likely to hire a candidate who holds a micro-credential.

For Baker Stein, these trends reflect a broader reality: “The pace at which learners and employees need to acquire skills has accelerated to unprecedented rates,” she argues, “and the way in which we deliver and verify skills needs to accelerate in similar fashion.” 

For many students, a degree on its own no longer feels like enough.

Seb Murray is a journalist and editor who often writes for the Financial Times and has written for The Times, The Guardian, The Economist, The Evening Standard and BBC Worklife. He focuses on higher education and global business, produces content for corporate and academic institutions, and speaks at international conferences as a recognised expert on higher education.


Ditching cognitive convenience: Adopting mindful and responsible AI in universities

By Claudia Civinini

In brief

  • As AI integrates into campuses, educators warn of its environmental footprint, arguing that high energy and water usage may clash with university sustainability goals.
  • Many users rely on AI for “cognitive convenience”: simple tasks like routine emails that carry high energy costs, often hidden within cloud services, for little real value.
  • Universities must lead by adopting clear usage policies, vetting green energy sources, and encouraging students to prioritise human critical thinking over unnecessary, resource-heavy machine shortcuts.

At the time of filing this feature, a manifesto for conscientious objection to AI had been signed by over 2,700 educators in France. Launched in December 2025, it has gathered signatures from lecturers, researchers and teachers across the education sector.

Three arguments support the manifesto’s stance against AI, touching on its social impact and the threat of misinformation. But the first argument is environmental: using AI is incompatible with international efforts to tackle climate change, such as the Paris Agreement, the manifesto says. This argument alone, it maintains, is enough to reject its use.

The environmental impact of AI use has become a more common topic of discussion in the sector. While data on the impact of commonly used tools is not consistently available, the energy use and water consumption of AI cannot be ignored.

Driven by growth in AI usage, data centres are making headlines for their impact on the environment and the communities around them. And the production, maintenance, disposal and transportation of AI’s hardware components require additional energy use and consumption of natural resources, argues Dr Yuan Yao, Associate Professor at Yale University.

An explainer on the London School of Economics website, detailing the impact of AI infrastructure and AI-powered applications on the environment, argues that AI regulation often focuses on privacy and ethics while neglecting the environmental aspect.

The authors argue that the future of AI is a political decision: “the choice could be made to build a future in which AI benefits society without causing irreparable damage to the environment,” they write. “Democratic deliberation and government regulation will be critical to promoting this choice.”

This is a choice and a question the sector is already grappling with.

Dr Rabab Haider, an Assistant Professor at the University of Michigan, said in an interview that we need to be conscious of the way we use AI tools, not just for data privacy but also for sustainability.

“There’s a tension in terms of trying to meet our U-M sustainability goals and decarbonisation goals while still pushing out this technology,” she said.

Roundtable discussions on AI and sustainability in higher education, held by UK non-profit organisation Jisc in 2025, also emphasised a dual benefit and risk. While AI’s resource usage can be substantial, the technology can contribute positively to environmental goals such as climate prediction, energy efficiency and carbon footprint tracking, according to a blog summarising the events.

A need for balance emerged: universities must weigh the potential of AI tools against their commitments to sustainability.

Invisible footprints and scale creep

Arif Gasilov, Partner, Sustainability Strategy & ESG Compliance at consulting company Gasilov Group, says that much of his work now overlaps with how organisations adopt AI. He says he keeps seeing higher education institutions roll out AI faster than the systems that track energy and emissions can keep up.

He notes several issues that can determine whether and how the environmental impact of AI tools can be managed by a university.

One issue is scale creep. An AI tool could be implemented for a specific function but then become standard across a programme. When this happens, Gasilov explains, staff and students come to rely on the tool, making it difficult to cut back even if the usage pattern is wasteful.

“Broad assistants that plug into everyday workflows tend to sprawl, because they can be used for almost anything. More task-shaped tools tend to have clearer boundaries and better levers for governance,” he says.

The environmental footprint of AI usage can be invisible, at least internally, since it sits in cloud services rather than on campus. If ownership or control of AI tools within a university is fragmented, he says, that environmental footprint becomes difficult to manage.

“IT can enable access, the department pays for the licensing, but no single group has full accountability, and because of that, when the responsibility is split, the teams cannot manage what they don’t control,” he says.

At the source

Dan Graf is CEO and Co-Founder of the AI-native carbon reporting platform earthchain. He says energy sources must also be vetted.

Cloud services need to be procured responsibly, for example from data centres that are powered by renewables and are not located in areas affected by water shortages.

“The university could also host some [smaller] models on its own, managed infrastructure, whether that’s their own data centre, or one that they contract directly… then they will have complete transparency over how that data centre is powered, where the energy sources are coming from,” he adds.

He says the University of Oxford, for example, has developed a Generative AI usage policy which favours running smaller models locally instead of relying on cloud services where the use of resources is opaque.

But beyond institution-wide governance, another, perhaps simpler, factor is key to mitigating AI’s environmental impact: questioning how we use AI tools and whether their use is necessary or justified.

“There are a lot of great case studies where AI is really helping companies and universities reduce their carbon footprint,” says Dr Mark McNees, Director of Social and Sustainable Enterprises at Florida State University’s Jim Moran College of Entrepreneurship.

"It’s an amazing tool if it’s leveraged for the right purposes.”

“The right purposes” is the key phrase here. AI has potential to support sustainability goals, but the devil, as usual, is in the caveats.

If used carefully, AI can improve progress towards sustainability goals, particularly around measurement and data tracking, identifying hotspots and trends, as well as scenario planning, Graf says.

“But the energy impact and water impact are potentially huge, and the use of AI without guidance, policies and documented best practices will do damage. This impact is also difficult to account for and track,” he adds.

A teacher turned education journalist, Claudia has been writing about international education for the best part of 10 years. Originally from Italy, she worked and studied in Australia before moving to the UK in 2014. As a journalist, she worked for publications in the education sphere such as EL Gazette, The PIE News and Tes, specialising in research and data-led reporting, before going freelance in 2021. She holds an MSc in Educational Neuroscience and is passionate about education research.