Artificial Intelligence

DEFINED

Artificial intelligence refers to computational systems that perform tasks normally requiring human judgement, perception, or decision making, where behaviour is shaped by data, optimisation, or learning rather than fixed rules alone. This includes machine learning systems, statistical classifiers, and generative models, and excludes purely rule based automation without adaptive behaviour.

Responsible AI

DEFINED

Responsible AI refers to the intended outcome of designing and using AI systems in ways that avoid foreseeable harm, respect human values, and maintain public trust. Responsible AI describes what is desired, not how it is achieved. Governance mechanisms are required to make these intentions actionable and reviewable.

AI governance

DEFINED

AI governance refers to the structures, processes, roles, artefacts, and decision points that make responsible AI outcomes repeatable, reviewable, and defensible. Governance is not synonymous with control or restriction. It is the practice of making decisions explicit, assigning accountability, and ensuring that intent, risk, and limitations are visible to others.

AI risk

DEFINED

AI risk refers to the potential for harm arising from the design, training, deployment, interpretation, or use of an AI system. This includes existing harms amplified by AI, emergent system behaviours, and downstream misuse or misinterpretation. Not all risks apply to all systems. Determining relevance is a core governance task.

Human oversight

DEFINED

Human oversight refers to the degree to which humans remain involved in reviewing, influencing, or controlling decisions informed by an AI system. Oversight may take the form of Human-in-the-Loop, where a human reviews each decision, Human-on-the-Loop, where a human supervises system behaviour, or Human-in-Command, where a human retains ultimate decision authority. In this handbook, human oversight is treated as a governance mechanism that must be explicitly defined and evidenced.

Human-in-the-Loop

DEFINED

Human-in-the-Loop refers to systems where a human must review or approve outputs before they influence real-world decisions. This approach is commonly used to manage uncertainty, reduce automation bias, and retain accountability during early deployment. Human review does not eliminate risk. It shifts where responsibility and failure may occur.

Accountability

DEFINED

Accountability refers to clarity over who is responsible for decisions influenced by an AI system at a given stage. Governance requires that decision authority, responsibility for scope changes, and ownership of review and progression are explicit. Ambiguous accountability is a common blocker to adoption.

Intended use

DEFINED

Intended use refers to the specific role a system is designed to perform. Non-purpose refers to uses that are explicitly ruled out. Governance requires both to be documented, as misinterpretation of system capability is a frequent source of risk.

Governance artefact

DEFINED

A governance artefact is a durable document that captures decisions, assumptions, scope, risks, accountability, and limitations in a reviewable form. Examples used throughout this handbook include the governance snapshot, risk categorisation, governance rationale, evidence pack, and adoption readiness summary. Artefacts enable traceability and defensibility.

Deferral

DEFINED

Deferral refers to a conscious governance decision that a valid action is premature at the current stage. Proper deferral includes the reason for deferral, the trigger for activation, and the accountable owner. Unrecorded deferral leads to governance drift.
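The three elements of a proper deferral, the reason, the trigger, and the accountable owner, can be captured in a minimal record. A sketch, with illustrative field names and example values not prescribed by this handbook:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DeferralRecord:
    """Minimal record of a conscious deferral decision (illustrative fields)."""
    action: str   # the valid action being deferred
    reason: str   # why it is premature at the current stage
    trigger: str  # condition that reactivates the action
    owner: str    # accountable owner for revisiting the decision

# Hypothetical example: recording the deferral keeps it reviewable
# rather than letting it become silent governance drift.
record = DeferralRecord(
    action="External bias audit",
    reason="Model scope not yet frozen; audit results would be invalidated",
    trigger="Scope sign-off at pilot review",
    owner="Project governance lead",
)
print(asdict(record)["owner"])
```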

Proportionality

DEFINED

Proportionality refers to aligning governance effort with system maturity, potential impact, and degree of uncertainty. In research contexts, proportional governance means doing enough to make decisions defensible while explicitly documenting what is deferred and why.

Evidence

DEFINED

Evidence refers to artefacts and records that demonstrate responsible practice. Evidence enables review and challenge but does not guarantee correctness or safety. Examples include scoping statements, risk rationales, evaluation results, and documented limitations.

Machine learning

DEFINED

Machine learning refers to techniques in which systems learn patterns or decision functions from data rather than being explicitly programmed. Machine learning systems are probabilistic. Their outputs represent likelihoods inferred from training data rather than deterministic truth. Governance relevance arises from data dependence, statistical error, and downstream use.

Supervised learning

DEFINED

Supervised learning refers to machine learning approaches trained on labelled data, where historical decisions or annotations act as reference outputs. In many media contexts, labels reflect prior human judgement rather than objective fact, introducing bias and inconsistency that must be treated as governance issues.
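A toy sketch of the supervised pattern: a decision rule is fitted to labelled examples, and the labels here stand in for prior human judgements rather than objective fact. The data and scoring are illustrative only:

```python
# Toy supervised learner: fit a score threshold from labelled examples.
def fit_threshold(examples):
    """Pick the threshold that best reproduces the historical labels."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(score for score, _ in examples):
        acc = sum((score >= t) == bool(label)
                  for score, label in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical historical labels (1 = previously flagged by an editor).
# Whatever bias or inconsistency those editors showed is learned too.
labelled = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
print(fit_threshold(labelled))  # → 0.6
```

The point for governance is that the fitted rule inherits the labelling decisions wholesale, so label quality must be treated as a first-class issue.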

Foundation models

DEFINED

Foundation models are large scale models trained on broad datasets and adapted for multiple downstream tasks. They introduce governance considerations relating to training data provenance, licensing constraints, opacity, and dependence on third party providers. Responsibility does not disappear when using foundation models. It shifts.

Fine tuning

DEFINED

Fine tuning refers to additional training of a model on task specific data. Fine tuning introduces new data dependencies and shifts accountability toward the adapting organisation.

Retrieval Augmented Generation

DEFINED

Retrieval Augmented Generation refers to architectures in which a generative model is augmented with a retrieval component that sources information from an external corpus at inference time. Such systems do not eliminate hallucination or error. They change the failure mode. Governance attention shifts toward corpus quality, traceability, access control, and update processes.
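The retrieve-then-prompt pattern can be sketched in a few lines. The corpus, the word-overlap scoring, and the prompt wording are all illustrative; production systems use vector search and a real generative model:

```python
# Toy sketch of the RAG pattern: retrieve passages, then build a grounded prompt.
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the prompt the model sees at inference time."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Answer using only the sources below.\n{context}\nQuestion: {query}"

# Hypothetical corpus of editorial guidance.
corpus = [
    "Archive items are licensed for editorial use only.",
    "The style guide requires captions on all images.",
    "Weather data is refreshed hourly from the provider feed.",
]
query = "What licence applies to archive items?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

Note where the governance burden sits in this sketch: the model can only be as grounded as the corpus and the retriever, which is why attention shifts to corpus quality, traceability, and update processes.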

Prompting

DEFINED

Prompting refers to shaping model behaviour through structured inputs rather than retraining. In this handbook, prompts are treated as configuration artefacts because changes to prompts can materially alter system behaviour and therefore require governance.
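Treating a prompt as a configuration artefact can be as simple as versioning the text and fingerprinting each revision, so that a behaviour change is traceable to a specific prompt change. Field names and the prompt text below are illustrative:

```python
import hashlib

# Hypothetical prompt text, held under version control like any other config.
PROMPT_V2 = (
    "You are a cautious assistant. Answer only from the supplied sources. "
    "If the sources are insufficient, say so explicitly."
)

def prompt_fingerprint(text: str) -> str:
    """Stable identifier for a prompt revision, suitable for audit logs."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

artefact = {
    "prompt_id": "grounded-answering",   # illustrative identifier
    "version": 2,
    "fingerprint": prompt_fingerprint(PROMPT_V2),
}
print(artefact["fingerprint"])
```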

Inference

DEFINED

Inference refers to the use of a trained model to generate outputs on new inputs. Most real world risk materialises at inference time. Assurances about training do not remove inference time governance obligations.

Training data

DEFINED

Training data refers to the dataset used to train or fine tune a model. Governance relevance includes bias, representativeness, licensing, provenance, embedded historical decisions, and potential drift over time. Training data is a primary source of system behaviour.

Evaluation

DEFINED

Evaluation refers to the process used to assess system performance, behaviour, and limitations under defined conditions. Governance requires clarity about what is measured, transparency about limitations, and recognition that evaluation context may differ from real world use.

Calibration

DEFINED

Calibration refers to how well model confidence aligns with actual correctness. Poor calibration increases the risk of over reliance and automation bias. This becomes governance relevant when confidence scores are exposed to users.
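One common way to quantify this alignment is expected calibration error: bin predictions by reported confidence and compare each bin's average confidence with its observed accuracy. A minimal sketch with illustrative data:

```python
def expected_calibration_error(confidences, correct, n_bins=5):
    """Weighted gap between reported confidence and observed accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / len(confidences)) * abs(avg_conf - accuracy)
    return ece

# A model that reports 0.9 confidence but is right only half the time
# is poorly calibrated, inviting over reliance: ECE is roughly 0.4.
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0]))
```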

Hallucination

DEFINED

Hallucinations are outputs that are fluent and plausible but factually incorrect, fabricated, or unsupported by underlying data. They are a structural property of many generative systems and require explicit handling.

Automation bias

DEFINED

Automation bias refers to the tendency for humans to over rely on automated outputs, particularly when systems appear authoritative or consistent. Governance responses include human oversight and clear decision ownership.

Model confidence

DEFINED

Model confidence refers to the apparent certainty with which outputs are presented. Poorly calibrated confidence can create over reliance and misinterpretation.

Drift

DEFINED

Model drift refers to degradation in system performance over time due to changes in data, behaviour, or context. Drift is often invisible without monitoring and becomes a governance obligation in applied contexts.
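Because drift is invisible without monitoring, even a crude check beats none. A minimal sketch comparing a live window of an input feature against a reference window from evaluation time; the threshold, statistic, and data are illustrative, and real monitoring uses richer tests such as population stability or distributional comparisons:

```python
from statistics import mean, stdev

def mean_shift_alert(reference, live, z_threshold=3.0):
    """Flag drift when the live mean moves several reference SDs away."""
    ref_mean, ref_sd = mean(reference), stdev(reference)
    shift = abs(mean(live) - ref_mean) / ref_sd
    return shift > z_threshold

# Hypothetical feature values: a reference window, a stable live window,
# and a live window whose distribution has moved.
reference = [0.48, 0.50, 0.52, 0.49, 0.51, 0.50]
stable    = [0.49, 0.51, 0.50]
drifted   = [0.80, 0.82, 0.79]
print(mean_shift_alert(reference, stable),
      mean_shift_alert(reference, drifted))  # → False True
```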

Uncertainty

DEFINED

Uncertainty refers to limits in knowledge about system behaviour, data, or real world conditions. Responsible governance requires uncertainty to be identified, documented, and communicated rather than hidden behind performance claims.

Ground truth

DEFINED

Ground truth refers to the reference data or decisions used to evaluate system outputs. In many media applications, ground truth reflects historical human judgement rather than objective fact, and this limitation must be recognised.

Explainability

DEFINED

Explainability refers to the ability to provide human understandable accounts of system behaviour. Interpretability refers to the extent to which internal system mechanisms can be meaningfully examined. Claims in these areas must be bounded and defensible.

Reliability

DEFINED

Reliability refers to the consistency and predictability of system behaviour relative to its intended use. Robustness refers to behaviour under variation, edge cases, or adversarial conditions. Generalisation refers to how well system behaviour transfers beyond training or evaluation conditions. Weakness in any of these areas increases uncertainty and may justify deferral.