In May 2025, GovernorHub is pleased to release a new AI Assistant in beta. The assistant is an AI chat tool tailored specifically for those involved in school governance. It uses the latest generation of large language models (LLMs) and draws on content from GovernorHub Knowledge to help you feel confident and equipped to support and challenge your school or academy.
We are asking for volunteers to try out this feature whilst it is in beta to help us make final decisions on its development. If you would like to try it out, please get in touch with our support team (support@governorhub.com or via the blue chat button).
How is this different from using OpenAI's GPT-4.1 or other AI models?
We use the very latest AI models (at present, our feature is built on OpenAI's GPT-4.1), but we prompt the AI to give you more targeted, high-value responses in more useful formats. The responses can also draw on every article in our Knowledge content, giving the assistant access to the same up-to-date, high-quality information that governing boards use all across the country. You should therefore get better and more complete answers from GovernorHub’s AI Assistant than from ChatGPT or other LLMs.
Who can use our AI Assistant?
Our AI Assistant is available to all GovernorHub users with a Knowledge subscription. As it is currently in beta testing, there is no charge for using these features, and we anticipate that they will be included in the cost of your existing subscription going forward.
What does it mean for a feature to be in beta?
'Beta' is a common term in software development, used to describe a new feature that needs user feedback before full release. While a feature is in beta, we encourage you to try it out and let us know your thoughts, so that we can combine your feedback with data from our own testing and development processes to finalise the feature.
What sort of errors could I see in responses?
Factuality and hallucinations: Language models are known to 'hallucinate', confidently asserting facts that may not be true. We advise exercising your judgement and independently verifying information you receive from GovernorHub’s AI Assistant, especially where you may rely on that information. Note that all responses given by our AI Assistant include links to the original GovernorHub Knowledge articles used to form the answer; follow these links to clarify any details you feel you need to.
Historic data: All LLMs are trained on historic data, so, unlike content from GovernorHub Knowledge, responses from LLMs are typically months, and sometimes years, out of date. Our AI Assistant reads GovernorHub Knowledge content to help it answer your queries, but its underlying model may not have up-to-date information.
Bias: Language models inherit biases and stereotypes from their training data. While we have made efforts to limit these within our AI-powered features, they may still occasionally emerge.
The AI Assistant doesn't know what it doesn't know: The AI Assistant does not have real-time access to the internet and will not be aware of all recent events, DfE policy changes, or guidance updates. It also has limited understanding of the data used to train it, how it processes user data, and its own terms of service.
Gullibility: Language models can occasionally be tricked into producing unsafe or inappropriate content on the premise that it is “hypothetical”, “for research”, or describing an imaginary situation. The AI Assistant can exhibit similar patterns.
Lack of memory: The AI Assistant currently has no memory of previous conversations you have had with it. This means it will not remember facts or topics you have shared with it in the past.
How are responses from the AI Assistant different from content on GovernorHub Knowledge?
Articles and policies on GovernorHub Knowledge are based on the latest guidance from the government, sector bodies, and best practice. They are carefully researched, written, edited, and cross-checked by our content team to be accurate, up to date, and practical.
The AI Assistant will use this content to answer your governance questions and support your work.
However, our editorial team does not check individual responses. You should not expect the AI Assistant’s answers to be as accurate, up to date, or as balanced and nuanced as the advice given by our expert team.
We do not recommend using the AI Assistant for legal or safeguarding matters.
Despite these caveats, we believe you will find the AI Assistant extremely helpful across a wide range of tasks, saving you time and helping you make better decisions.
How shouldn’t you use the AI Assistant or any of our AI-powered features?
Not for personal information: Do not share or seek personal data about staff, governors, pupils, parents, carers, or any individual.
Not for sensitive issues: Avoid using the features for highly sensitive or confidential matters.
Not as a decision-making device: Do not use the AI features as your sole source of information for critical decisions.
Not for legal advice: Do not treat responses as legal counsel or definitive compliance guidance, especially on sensitive topics like safeguarding.
What approach to AI is GovernorHub taking?
Accuracy and providing the 'knowledge to act' sit at the heart of GovernorHub’s mission and culture. Anyone involved in school governance rightly expects that the answers and technology we provide are safe, trustworthy, and reliable.
How are you working to improve the quality of your AI Assistant?
Accuracy and Safety
Delivering accuracy and safety is an iterative process. We continually review and improve all our services to ensure they comply with our policies and principles. The AI Assistant (currently in beta) is subject to this same iterative process.
AI technology is still in its early stages and far from perfect. As we work to improve our techniques and methodology, we will share updates publicly.
Review and Improvement
Four elements form our approach to reviewing and improving the behaviour of our AI-powered features:
Monitoring: We use automated monitoring to understand usage, the quality of responses, and where our models might be failing. These systems help us uncover unsafe patterns and prioritise issues to fix.
Log review: Quantitative methods are only one part of our approach. GovernorHub also reviews logs flagged by members to improve accuracy and safety.
‘Red teaming’: We actively try to circumvent our own safety and anti-bias mechanisms as part of our improvement and fine-tuning work.
Your feedback matters: A major way we improve is through feedback and suggestions from users of our AI features. If you see something that should be fixed or improved, we encourage you to reach out to us.