AI Transparency Statement
Practising what we preach | Last updated: 10 April 2026
Our Philosophy
At AI Governance Hub, we help organisations govern their AI systems responsibly. To maintain credibility and trust, we must be equally transparent about our own use of AI.
This page explains exactly where and how we use artificial intelligence in our platform, what data is involved, and which parts of the platform remain deterministic and AI-free. We believe in radical transparency because that's what we ask of our customers.
Summary: Where AI Is and Is Not Used
Some platform features use AI (Claude by Anthropic)
Knowledge base curation, the regulatory intelligence sweep, policy draft assistance, and the support chat use Claude (Anthropic) to process regulatory guidance and limited platform content. See the full breakdown below.
Core governance features remain fully deterministic
Risk assessments, compliance scoring, AI Impact Assessment (AIIA) generation, and document management use rule-based algorithms only. No AI influences your scores or decisions. Your governance data is never sent to an AI service.
Features That Use AI (Claude by Anthropic)
The following platform features use the Claude API (Anthropic) to process regulatory and platform content. In each case, we describe exactly what is sent to the AI and what is not.
1. Knowledge Base Intelligence
Our knowledge base includes regulatory guidance articles (UK GDPR, ICO AI guidance, EU AI Act, Equality Act). Claude analyses these articles to:
- Identify content that may be stale or outdated relative to current guidance
- Generate concise summaries of regulatory articles
- Identify gaps in coverage across compliance topics
What is sent to Claude: Regulatory article titles, summaries, and metadata (source URL, publication date). No personal data, no customer AI system details, no risk assessment data.
2. Regulatory Intelligence Sweep
A daily automated sweep monitors authoritative regulatory sources (ICO, EUR-Lex, Gov.uk) for changes that may affect the compliance checklists within the platform. Claude is used to:
- Assess whether a detected change is material to AI governance obligations
- Summarise the impact of a regulatory change on specific compliance areas
- Map regulatory changes to relevant compliance checklist items
What is sent to Claude: Regulatory source text excerpts, checklist item descriptions, and framework context. No customer data, no personal data, no organisation-specific information.
3. Policy Document Assistance
When generating a policy document draft, Claude uses governance framework templates and your selected configuration (AI system type, sector, risk level) to produce a starting draft. You review and edit the output before using it.
What is sent to Claude: Your AI system's name, type, sector classification, and risk level (as you have entered them). Your risk assessment responses, AIIA content, uploaded documents, and personal data fields are not included in AI prompts.
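To make that boundary concrete, the sketch below illustrates the kind of configuration context that accompanies a policy-draft request and the data that never leaves the platform. The field names and structure are illustrative assumptions for this page, not our production schema:

```typescript
// Illustrative only: the limited configuration context included in a
// policy-draft prompt. Field names are examples, not our real schema.
interface PolicyDraftContext {
  systemName: string;   // as you entered it
  systemType: string;   // e.g. "chatbot"
  sector: string;       // your sector classification
  riskLevel: "critical" | "high" | "medium" | "low";
}

// Deliberately excluded from every AI prompt:
// risk assessment responses, AIIA content, uploaded documents,
// and any personal data fields.
```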
4. Support Chat
The in-platform support chat uses Claude to answer questions about how to use the platform, regulatory frameworks, and governance concepts.
What is sent to Claude: Your chat messages and a system prompt describing the platform context. Your governance data (AI systems, risk assessments, compliance checklists, documents) is not included in chat context.
Important: Do not share personally identifiable information, confidential business details, or sensitive data in the support chat.
Features That Do NOT Use AI
The following features are fully deterministic. No AI is involved:
Risk Assessment Scoring
Fully deterministic arithmetic. You answer questions; predefined point values are summed and weighted. The same inputs always produce the same score.
Compliance Dashboard Scoring
Percentage completion only. Score = completed items / total items per framework. No AI, no inference, no weighting beyond what is documented.
AI Impact Assessment (AIIA) Generation
Template-based. You populate a structured form; the platform assembles the document. No AI generates or modifies your AIIA content.
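As a hedged illustration of what "template-based" means here (the template text and field names below are invented for this example), the assembly is plain substitution of your form answers into fixed wording:

```typescript
// Illustrative only: AIIA content is assembled by substituting your form
// answers into fixed template text. No model generates or rewrites anything.
const aiiaTemplate =
  "Purpose of the system: {{purpose}}\nAffected groups: {{groups}}";

function assembleAiia(fields: Record<string, string>): string {
  return aiiaTemplate.replace(/\{\{(\w+)\}\}/g, (_match, key) => fields[key] ?? "");
}

// Example
assembleAiia({ purpose: "Triage of incoming support tickets", groups: "Customers" });
```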
Document Pack Generation
A rules engine assembles document packs based on your system's risk class and completed data. Fully deterministic — no LLM involved.
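For illustration, a rules engine of this kind reduces to a fixed lookup: the risk class selects a predefined set of templates. The template names and mapping below are our own example, not the platform's actual configuration:

```typescript
// Illustrative only: a deterministic lookup from risk class to document pack.
type RiskClass = "critical" | "high" | "medium" | "low";

const DOCUMENT_PACKS: Record<RiskClass, string[]> = {
  critical: ["aiia", "dpia", "model-card", "incident-response-plan"],
  high: ["aiia", "dpia", "model-card"],
  medium: ["aiia", "model-card"],
  low: ["model-card"],
};

// The same risk class always yields the same pack -- no inference involved.
const selectDocumentPack = (riskClass: RiskClass): string[] =>
  DOCUMENT_PACKS[riskClass];
```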
Document Repository
Files are stored and retrieved as-is. We do not analyse or process document content with AI.
No Automated Decisions About You
All governance decisions (risk classifications, compliance status, system categorisation) are made by you. The platform calculates scores from your inputs but makes no autonomous decisions.
How Our Core Features Work
Risk Assessment
Our risk assessment tool uses a deterministic, rule-based scoring system. You answer a structured set of multiple-choice questions across key governance categories. Each answer is evaluated against predefined criteria, producing a score that maps to a risk level (Critical, High, Medium, or Low). The same answers always produce the same result. No AI, no learning, no inference.
Mitigation recommendations are pre-written and displayed based on which areas score lowest — there is no AI generating or adapting this guidance.
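As a minimal sketch of the mechanism (the point values, weights, and thresholds below are invented for this example; the platform's real ones are predefined but work the same way), the score is simple weighted arithmetic mapped to fixed bands:

```typescript
// Illustrative only: deterministic risk scoring. Values and thresholds are
// invented for this example; the mechanism is sum, weight, and map to a band.
type RiskLevel = "Critical" | "High" | "Medium" | "Low";

interface Answer {
  points: number; // predefined value for the option you selected
  weight: number; // predefined weight for the question's category
}

function scoreRisk(answers: Answer[]): { score: number; level: RiskLevel } {
  const score = answers.reduce((sum, a) => sum + a.points * a.weight, 0);

  // Fixed thresholds: identical answers always map to the identical level.
  const level: RiskLevel =
    score >= 75 ? "Critical" : score >= 50 ? "High" : score >= 25 ? "Medium" : "Low";

  return { score, level };
}
```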
Compliance Scoring
Compliance dashboard scores reflect your completion progress against each regulatory framework. Scores are calculated deterministically from your checklist responses. No AI influences or adjusts these scores.
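As a minimal sketch (the function name is ours, for illustration), the score is a completion percentage and nothing more:

```typescript
// Illustrative only: compliance score = completed items / total items per framework.
function complianceScore(completedItems: number, totalItems: number): number {
  if (totalItems === 0) return 0;
  return Math.round((completedItems / totalItems) * 100);
}

// Example: 18 of 24 checklist items completed -> 75
complianceScore(18, 24);
```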
Our AI Provider
For the AI-assisted features above, we use Claude (Anthropic). Anthropic is based in San Francisco, USA, and operates under a GDPR-compliant data processing agreement with us.
Data sent to Anthropic is limited to regulatory content, article text, and minimal system configuration context as described above. Your personal data, organisation records, risk assessment responses, AIIA content, and uploaded documents are not sent to Anthropic.
Anthropic's API data handling policy is available at anthropic.com/privacy.
Development Tools (AI-Assisted Coding)
Separately, our development team uses AI-assisted tools to build and maintain the platform:
- Claude Code (Anthropic): AI coding assistant used by the development team
These tools help us write code and do not process customer data.
Governing Our Own AI Use
We govern our own use of AI using the same principles we ask of our customers:
Governed Using Our Own Platform
We document our own AI systems in AI Governance Hub. Our risk assessments and AIIAs for platform AI features will be published publicly.
Human Review of AI Outputs
Regulatory intelligence sweep outputs are reviewed by our team before any compliance checklist changes are applied. AI-generated policy drafts require your review before use.
Transparency First
This statement is updated whenever we introduce, change, or remove AI capabilities. If it is ever inaccurate, we invite you to hold us accountable.
Questions About AI Usage?
If you have questions about our use of AI, or if you discover any AI usage not disclosed here, please contact us:
Email: connect@aigovernancehub.uk
Subject line: "AI Transparency Inquiry"
We will respond within 5 business days with a detailed explanation.
Policy Updates
We will update this AI Transparency Statement whenever we introduce, change, or remove AI capabilities. Updates will be communicated via:
- Email notification to active users
- In-app notification
- Updated "Last Updated" date on this page
Our Commitment
We pledge to maintain this level of transparency about AI usage as long as AI Governance Hub exists.
Transparency is not just a value — it's our business model. We cannot help you govern AI responsibly if we are not doing it ourselves.