
The DfE updates its Gen AI product safety standards

Stone King welcomes the recent update (19 January 2026) of the Department for Education's product safety guidance for generative AI tools.  

There are entirely new sections which seek to ensure that the now widely acknowledged potential harms to young people are addressed by EdTech suppliers and education settings. The guidance clearly states that the corresponding standards may need to be extended or modified as additional research and evidence on the impact of generative AI becomes available.

The new sections and standards cover: 

  • cognitive development, in response to the risk of “cognitive offload”;
  • emotional and social development, in response to the risk of dependence and replacement of real human relationships;
  • mental health, in response to the risk of adverse outcomes for users in distress; and
  • manipulation, in response to the risk of exploiting users for commercial gain.

There is a positive change to the requirements for filtering out harmful or inappropriate content: rather than relying on add-ons, AI product suppliers must now integrate embedded filtering functions. There is also a new requirement for clear statements of purpose and use cases.

We would emphasise that there are no substantive changes to the requirements on compliance with the relevant legislation and regulation, namely the Online Safety Act, data protection and intellectual property law, Keeping children safe in education, the Filtering and monitoring standards for schools and colleges, the Cyber Security Standards for Schools and Colleges, and, last but not least, the Public Sector Equality Duty.

We would also emphasise, from the perspective of the education sector buyer, that understanding the purpose and use cases; understanding the safety profile, testing and risk assessment; and addressing these points within your own framework for AI governance are key practical steps to take. If you have not already started developing your own framework for AI governance, please get in touch with our team today.

Generative AI products should monitor, regularly report on, and provide data to teachers on:

  • the rate of requests for cognitive offloading and the amount of cognitive offloading delivered;
  • the level of personal and emotional engagement by each user in terms of the nature of information exchanged, without directly disclosing the content of these inputs; and
  • the duration of usage by each individual learner.
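For suppliers thinking through how to meet that reporting standard in practice, the following is a minimal sketch of what a teacher-facing, per-learner usage report covering those three data points might look like. The schema, the `UsageReport` name, and every field name are our own illustrative assumptions; the DfE guidance does not prescribe any particular data model.

```python
from dataclasses import dataclass
from datetime import timedelta


@dataclass
class UsageReport:
    """Hypothetical per-learner report for teachers.

    Covers the three data points named in the standard without
    exposing the content of the learner's inputs.
    """
    learner_id: str            # pseudonymous identifier, not the learner's name
    offload_requests: int      # rate/count of requests for cognitive offloading
    offload_delivered: int     # amount of cognitive offloading actually delivered
    engagement_level: str      # e.g. "low" / "medium" / "high": reflects the
                               # nature of information exchanged, not its content
    total_usage: timedelta     # duration of usage by this individual learner


# Example of what a single teacher-facing report entry might contain
report = UsageReport(
    learner_id="learner-0042",
    offload_requests=15,
    offload_delivered=9,
    engagement_level="medium",
    total_usage=timedelta(hours=1, minutes=40),
)
```

Note the design choice in `engagement_level`: reporting a coarse category rather than any transcript keeps the report consistent with the standard's requirement not to directly disclose the content of a learner's inputs.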

Tags

academies and mats, edtech, education, faith schools, further education, independent schools, state-funded schools, artificial intelligence