Artificial Intelligence
DEFINED

Artificial intelligence refers to computational systems that perform tasks normally requiring human judgement, perception, or decision-making, where behaviour is shaped by data, optimisation, or learning rather than by fixed rules alone. This includes machine learning systems, statistical classifiers, and generative models, and excludes purely rule-based automation without adaptive behaviour.

Responsible AI
DEFINED

Responsible AI refers to the intended outcome of designing and using AI systems in ways that avoid foreseeable harm, respect human values, and maintain public trust. Responsible AI describes what is desired, not how it is achieved. Governance mechanisms are required to make these intentions actionable and reviewable.

AI governance
DEFINED

AI governance refers to the structures, processes, roles, artefacts, and decision points that make responsible AI outcomes repeatable, reviewable, and defensible. Governance is not synonymous with control or restriction. It is the practice of making decisions explicit, assigning accountability, and ensuring that intent, risk, and limitations are visible to others.

AI risk
DEFINED

AI risk refers to the potential for harm arising from the design, training, deployment, interpretation, or use of an AI system. This includes existing harms amplified by AI, emergent system behaviours, and downstream misuse or misinterpretation. Not all risks apply to all systems. Determining relevance is a core governance task.

Human oversight
DEFINED

Human oversight refers to the degree to which humans remain involved in reviewing, influencing, or controlling decisions informed by an AI system. Oversight may take the form of Human-in-the-Loop, where a human reviews each decision; Human-on-the-Loop, where a human supervises system behaviour; or Human-in-Command, where a human retains ultimate decision authority. In this handbook, human oversight is treated as a governance mechanism that must be explicitly defined and evidenced.
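
By way of illustration only, the oversight forms above can be captured as an explicit configuration value so that the chosen level is visible and reviewable. The OversightMode enumeration and the requires_per_decision_review helper below are hypothetical names used for this sketch, not part of any standard or tooling.

    from enum import Enum

    class OversightMode(Enum):
        # Hypothetical labels mirroring the three oversight forms described above.
        HUMAN_IN_THE_LOOP = "human_in_the_loop"   # a human reviews each decision
        HUMAN_ON_THE_LOOP = "human_on_the_loop"   # a human supervises system behaviour
        HUMAN_IN_COMMAND = "human_in_command"     # a human retains ultimate decision authority

    def requires_per_decision_review(mode: OversightMode) -> bool:
        # Only Human-in-the-Loop demands review of every individual output.
        return mode is OversightMode.HUMAN_IN_THE_LOOP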

Human-in-the-Loop
DEFINED

Human-in-the-Loop refers to systems where a human must review or approve outputs before they influence real-world decisions. This approach is commonly used to manage uncertainty, reduce automation bias, and retain accountability during early deployment. Human review does not eliminate risk. It shifts where responsibility and failure may occur.
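
A minimal sketch of such a review gate follows, assuming a review_decision callback supplied by the deploying organisation that returns an approval flag and a reviewer identifier. It illustrates where the human review step sits in the flow, not how any particular system implements it.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ReviewedOutput:
        # Hypothetical record pairing a model output with the human decision about it.
        model_output: str
        approved: bool
        reviewer: str

    def gate_output(model_output: str,
                    review_decision: Callable[[str], tuple[bool, str]]) -> ReviewedOutput:
        # Hold the model output until a human reviewer approves or rejects it.
        approved, reviewer = review_decision(model_output)
        return ReviewedOutput(model_output=model_output, approved=approved, reviewer=reviewer)

Note that the gate records who reviewed the output and what they decided; escalation, logging, and the handling of rejected outputs remain governance decisions in their own right.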

Accountability
DEFINED

Accountability refers to clarity over who is responsible for decisions influenced by an AI system at a given stage. Governance requires that decision authority, responsibility for scope changes, and ownership of review and progression are explicit. Ambiguous accountability is a common blocker to adoption.

Intended use
DEFINED

Intended use refers to the specific role a system is designed to perform. Non-purpose refers to uses that are explicitly ruled out. Governance requires both to be documented, as misinterpretation of system capability is a frequent source of risk.
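
One way to keep both halves visible, sketched here purely as an illustration, is a short declaration stored alongside the system. The field names are hypothetical rather than a prescribed schema.

    # Illustrative intended-use declaration; the field names are hypothetical.
    intended_use_declaration = {
        "intended_use": "Rank incoming support tickets by likely urgency for triage staff.",
        "non_purpose": [
            "Automatically closing tickets without human review.",
            "Evaluating the performance of individual staff members.",
        ],
    }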

Governance artefact
DEFINED

A governance artefact is a durable document that captures decisions, assumptions, scope, risks, accountability, and limitations in a reviewable form. Examples used throughout this handbook include the governance snapshot, risk categorisation, governance rationale, evidence pack, and adoption readiness summary. Artefacts enable traceability and defensibility.
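
As a sketch only, the contents of an artefact can be held in a simple structured record so that each element is present and reviewable. The GovernanceSnapshot name and fields below are assumptions drawn from the list above, not a mandated format.

    from dataclasses import dataclass, field

    @dataclass
    class GovernanceSnapshot:
        # Illustrative fields mirroring the elements a governance artefact captures.
        decisions: list[str] = field(default_factory=list)
        assumptions: list[str] = field(default_factory=list)
        scope: str = ""
        risks: list[str] = field(default_factory=list)
        accountable_owner: str = ""
        limitations: list[str] = field(default_factory=list)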
