21 August 2025

AI is game-changing when it’s human, ethical and equitable


Anthony Painter, Director of Strategic Engagement at Ufi, wrote this article for FE Week, where it was first published. We’re pleased to share it here to continue the conversation about how AI can transform vocational education.

AI will be used in harmful ways if leadership teams don’t immerse themselves in understanding its risks and opportunities.

The decisions taken on AI by colleges and training providers in the next year will fundamentally alter the lives of learners and teachers over the next decade. Already there is enough evidence of both the enormous benefits that well-designed AI systems can offer, and the harm that poorly implemented AI can cause. If AI literacy is not treated as a priority within leadership teams then the risk of harmful AI increases.

You will notice that I haven't explored the possibility of no AI here. Consumer-accessible generative AI applications are already widely used by learners and tutors, and will continue to be so, above or under the radar. They may be used in ways that harm learning or are unethical.

There are many entirely reasonable objections to the current wave of big tech-led AI innovation: it is driven by profit rather than purpose, it replicates societal racial and gender biases, and its environmental costs are already coming into view. So surely avoidance is the ethical choice?

The consequence of leadership teams not deeply immersing themselves in both risks and opportunities will be that AI is used in harmful ways: bypassing cognitive development, deskilling professionals, creating unfair advantages for those with AI skills, and contracting out critical thinking to technologies that have undoubted flaws.

“You are an AI organisation already – whether that’s acknowledged or not”

If you haven't developed a systematic approach, you may already be facing the harm that unethical, inequitable and dehumanising AI can cause, because these technologies are both readily available and widely accessed. You are an AI organisation already – whether that is acknowledged or not.

Ignoring AI may make us feel ethically better, but we can shape a better future by using it in a mindful way, cognisant of environmental harms; in a human way, crafted to improve the knowledge and skills of learners and tutors; and in an equitable way, aware of inequalities and poor representation. Some colleges and training providers are doing this now.

Travel and tourism students at Hull College using AI in their training to become cabin crew.

It is vital to look at the range of evidence when designing AI systems: to help learners develop their skills, to support tutors in designing personalised and engaging programmes of tuition whilst helping them manage their workload, and to give support staff richer data insights and better processes. A recent MIT study shows the cognitive deficit that arises when students outsource their learning to AI. It also shows that a "brain first, AI later" approach – using AI to review work rather than produce it – is a good combination.

An experiment on the impact of using ChatGPT in lesson planning showed that it saved 30 per cent on preparation time with no impact on lesson quality as assessed by an expert panel. All this emphasises the importance of reviewing the available evidence systematically.

We are seeing some institutions adopt and even develop AI systems that are heavily focused on the human, ethics and equity. Ofsted has reviewed some of the best practice in its paper "The biggest risk is doing nothing". Activate Learning has implemented a suite of AI tools, early-stage evaluation of which has shown improved outcomes and well-being. Windsor Forest Colleges Group has developed a teacher support AI, "Winnie". Basingstoke College of Technology has taken a whole-college approach to upskilling staff and students in AI and giving them a licence to innovate responsibly.

Deliberately designing AI systems to stretch learners rather than bypass their learning is key. Developing datasets with fewer systemic biases and training AI on them, including available open-source AI, can help reduce biases.

And we need to widen access to the development of critical thinking and communication skills that enable individuals to adapt to future AI innovations.

Data-safe environments are essential to protect private data. Whilst the actions of one individual or college will not significantly dampen environmental impacts, we should be mindful of the carbon impact of using AI, just as we are when driving a car.

The Finnish government has committed to pursuing human-centred and ethical AI whilst supporting its integration into education. Estonia has encouraged a similar approach whilst leaving education institutions free to innovate; safe, ethical and responsible use is in its national curriculum.

Our DfE has recently issued a policy paper on generative AI in education, and appears to be determined to see AI spread.

We will be working through our partnerships with sector bodies to see wider adoption of responsible AI. The whole skills community needs to get this right – at a whole system level. There is much that is encouraging in both policy and practice. There now needs to be collective action to make positive, human-centred tech happen.
