This is the seventh of eight Applied modules in a 24-module course. You understand how AI systems are built, deployed, and attacked. Now the question shifts from technical capability to societal accountability: who decides which AI systems are acceptable, and how are those decisions enforced?

Real-world milestone · March 2024
In March 2024, the European Parliament approved the AI Act, the world's first comprehensive legislation governing artificial intelligence. The Act classifies AI systems into four risk tiers and applies progressively stricter requirements to each tier. Systems deemed unacceptable risk (social scoring, real-time biometric surveillance in public spaces) are banned outright. High-risk systems (hiring tools, credit scoring, medical devices) must undergo conformity assessments, maintain technical documentation, and implement human oversight.
The Act took three years to negotiate. Early drafts focused on traditional ML systems. The release of ChatGPT in November 2022, midway through negotiations, forced legislators to add provisions for general-purpose AI models, including transparency requirements and obligations for providers of foundation models.
The EU AI Act is not the only governance framework. The UK established the AI Safety Institute (originally the Frontier AI Taskforce) to evaluate advanced models before deployment. The US issued Executive Order 14110 on Safe, Secure, and Trustworthy AI. NIST published the AI Risk Management Framework (AI RMF). Each takes a different approach, but all share the premise that AI systems require governance proportional to their potential harm.
Should all AI systems face the same level of regulation, or should oversight be proportional to risk?
The EU AI Act matters regardless of where you are based. If your AI system serves users in the EU, it falls under the Act's jurisdiction. And the governance patterns it establishes (risk classification, conformity assessment, transparency obligations) are being adopted globally. This module gives you the framework to navigate the regulatory environment.
With the learning outcomes established, this module begins by examining the EU AI Act's risk-based classification in depth.
The EU AI Act classifies AI systems into four tiers based on the risk they pose to health, safety, and fundamental rights:
Unacceptable risk (banned). AI systems that manipulate human behaviour to circumvent free will (subliminal techniques), exploit vulnerable groups, enable social scoring by governments, or perform real-time biometric identification in public spaces for law enforcement (with narrow exceptions). These systems are prohibited entirely.
High risk. AI systems used in critical infrastructure, education and vocational training, employment and worker management, access to essential services (credit scoring, insurance), law enforcement, migration and border control, and administration of justice. These must meet strict requirements: risk management systems, data governance, technical documentation, record-keeping, transparency to users, human oversight provisions, and accuracy and robustness standards. Providers must conduct conformity assessments before placing the system on the market.
Limited risk. AI systems that interact directly with humans (chatbots), generate synthetic content (deepfakes), or perform emotion recognition. These face transparency obligations: users must be informed they are interacting with an AI, and synthetic content must be labelled.
Minimal risk. All other AI systems (spam filters, AI in video games, inventory management). No specific obligations, though voluntary codes of conduct are encouraged.
“The regulation follows a risk-based approach, differentiating between uses of AI that create an unacceptable risk, a high risk, and a low or minimal risk.”
European Commission, 'Proposal for a Regulation on Artificial Intelligence' (2021) - Article 6, Classification
This risk-based approach was deliberately chosen over a technology-based approach (which would regulate specific techniques like deep learning) or an application-agnostic approach (which would apply the same rules to all AI). The risk-based model means the same technology can face different requirements depending on how it is used: a neural network in a spam filter (minimal risk) faces no obligations, while the same architecture in a hiring tool (high risk) faces extensive requirements.
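To make this context-dependence concrete, here is a minimal Python sketch that maps deployment contexts to risk tiers. The four tiers come from the Act, but the use-case names, the lookup table, and the `classify` function are illustrative simplifications for teaching purposes, not a compliance tool.

```python
# Minimal sketch: under the EU AI Act, the tier depends on the deployment
# context, not the underlying technology. All mappings here are illustrative.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical lookup table (see Annex III for the actual high-risk list).
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_public_biometrics": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "photo_library_sorting": RiskTier.MINIMAL,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case (illustrative only).

    A real assessment defaults to scrutiny, never to 'minimal', so an
    unknown context deliberately raises KeyError here."""
    return USE_CASE_TIERS[use_case]

# The same neural network architecture, three different regulatory outcomes:
for use in ("spam_filter", "hiring_screening", "realtime_public_biometrics"):
    print(f"{use}: {classify(use).name} -> {classify(use).value}")
```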
With an understanding of the EU AI Act's risk-based classification in place, the discussion can now turn to the UK AI Safety Institute, which takes a deliberately different approach.
The UK established the AI Safety Institute (AISI) in November 2023, originally as the Frontier AI Taskforce. Unlike the EU's legislative approach, the UK adopted a pro-innovation, sector-specific regulatory strategy. Rather than creating a single comprehensive law, the UK government tasked existing regulators (the FCA for finance, the CQC for healthcare, Ofcom for communications) with applying AI-specific guidance within their domains.
AISI's role is evaluating frontier AI models (the most capable general-purpose models) for catastrophic risks before and after deployment. The institute conducts technical evaluations, publishes safety assessments, and develops evaluation methodologies. It operates on a voluntary cooperation basis with model developers, though the UK government has signalled willingness to legislate if voluntary commitments prove insufficient.
The UK approach is fundamentally different from the EU approach. The EU regulates by risk category (any AI system in a high-risk domain faces the same requirements). The UK regulates by sector (the financial regulator decides what AI rules apply in finance). Both approaches have trade-offs: the EU provides legal certainty but risks over-regulation of low-risk applications; the UK provides flexibility but risks inconsistency across sectors.
With an understanding of the UK AI Safety Institute in place, the discussion can now turn to the NIST AI RMF and model cards, which build directly on these foundations.
The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, provides a voluntary framework for managing AI risks across the lifecycle. It defines four core functions: Govern (establish policies and accountability), Map (identify and classify risks), Measure (assess and track risks), and Manage (prioritise and respond to risks). Unlike the EU AI Act, the NIST AI RMF is not legally binding, but it is widely adopted as a compliance-readiness framework.
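The four functions are easiest to internalise as a working checklist. The sketch below encodes them as a Python structure of our own design: the function names and goals paraphrase AI RMF 1.0, while the data structure and example activities are invented examples of what a team might track under each function.

```python
# A sketch of tracking work against the NIST AI RMF's four core functions.
# Function names come from AI RMF 1.0; everything else is illustrative.
from dataclasses import dataclass, field

@dataclass
class RmfFunction:
    name: str          # Govern, Map, Measure, or Manage
    goal: str          # paraphrased from AI RMF 1.0
    activities: list[str] = field(default_factory=list)  # a team's open items

rmf = [
    RmfFunction("Govern", "establish policies and accountability",
                ["assign a risk owner", "define escalation paths"]),
    RmfFunction("Map", "identify and classify risks",
                ["list affected stakeholders", "catalogue potential harms"]),
    RmfFunction("Measure", "assess and track risks",
                ["disaggregate metrics by demographic group"]),
    RmfFunction("Manage", "prioritise and respond to risks",
                ["rank risks by severity", "schedule remediation"]),
]

for fn in rmf:
    print(f"{fn.name}: {fn.goal} ({len(fn.activities)} open activities)")
```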
Model cards, proposed by Mitchell et al. at Google in 2019, are standardised documentation for AI models. A model card describes the model's intended use, training data, evaluation metrics, performance across demographic groups, known limitations, and ethical considerations. Model cards serve a similar function to nutrition labels on food: they do not make the product safe, but they give users the information needed to assess whether the product is appropriate for their use case.
The EU AI Act requires technical documentation for high-risk systems that covers much of what model cards contain, plus additional requirements around data governance and human oversight. Organisations that already produce comprehensive model cards have a head start on EU compliance.
“Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, relevant to the intended application domains.”
Mitchell, M. et al., 'Model Cards for Model Reporting', FAT* Conference (2019) - Section 1: Introduction
Model cards were proposed to address the transparency gap in ML: models were often released without documentation of their limitations, biases, or intended scope. The model card framework has been adopted by Hugging Face, Google, and other major platforms as a standard for model documentation.
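To see what this documentation looks like in practice, here is a sketch of a model card as structured data, loosely following the sections proposed by Mitchell et al. (2019). The field names and example values are hypothetical; real model cards (for example, on Hugging Face) are typically markdown documents with a metadata header rather than code.

```python
# A minimal model card as a data structure. Sections loosely follow
# Mitchell et al. (2019); the model and its numbers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str
    metrics: dict[str, float]                   # aggregate evaluation
    disaggregated: dict[str, dict[str, float]]  # per-group performance
    limitations: list[str] = field(default_factory=list)
    ethical_considerations: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="face-attr-classifier-v2",  # hypothetical model
    intended_use="Sorting personal photo libraries; not for identification.",
    training_data="Public image set, skewed toward light-skinned subjects.",
    metrics={"accuracy": 0.94},
    disaggregated={"light_skin": {"accuracy": 0.96},
                   "dark_skin": {"accuracy": 0.81}},
    limitations=["Not evaluated on diverse skin tones."],
    ethical_considerations=["Unsuitable for employment or policing uses."],
)

# The per-group gap is exactly the information a downstream deployer needs
# to judge whether the model is appropriate for their context.
gap = (card.disaggregated["light_skin"]["accuracy"]
       - card.disaggregated["dark_skin"]["accuracy"])
print(f"{card.model_name}: accuracy gap across groups = {gap:.2f}")
```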
Common misconception
“AI governance is only relevant to companies building large models.”
The EU AI Act applies to any organisation that deploys AI systems in the EU, regardless of whether they built the model. If you integrate a third-party model into a hiring tool, you are a 'deployer' of a high-risk AI system and face compliance obligations including transparency to users, human oversight, and incident reporting. Even using an off-the-shelf API can trigger regulatory requirements if the application falls into a high-risk category.
With an understanding of the NIST AI RMF and model cards in place, the discussion can now turn to AI impact assessments, which build directly on these foundations.
An AI impact assessment evaluates the potential effects of an AI system on individuals, groups, and society before deployment. It is the AI equivalent of an environmental impact assessment: you analyse the consequences before you build, not after the damage is done.
A rigorous impact assessment covers: the system's purpose and intended users; the data it consumes and how that data was collected; performance metrics disaggregated by demographic groups; potential for bias and discrimination; the consequences of false positives and false negatives for affected individuals; mechanisms for human oversight and appeal; and a plan for ongoing monitoring and remediation.
Canada's Algorithmic Impact Assessment (AIA) is one of the most mature examples. Federal agencies must complete an AIA before deploying any automated decision system. The assessment produces a risk score that determines the level of oversight, transparency, and due process required. High-impact systems require peer review, public notice, and mechanisms for individuals to challenge automated decisions.
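The mechanics of questionnaire-based impact scoring can be sketched in a few lines. The questions, weights, and thresholds below are invented for illustration; Canada's actual AIA uses a much longer questionnaire with its own scoring bands and impact levels.

```python
# A sketch of impact-assessment scoring in the spirit of Canada's AIA:
# yes/no answers produce a score, and the score sets the oversight level.
# Questions, weights, and thresholds are invented for illustration.
QUESTIONS = {
    "decides_access_to_essential_services": 3,
    "fully_automated_no_human_review": 3,
    "uses_sensitive_personal_data": 2,
    "affects_vulnerable_groups": 2,
    "decision_is_hard_to_appeal": 2,
}

def impact_level(answers: dict[str, bool]) -> str:
    """Map yes/no answers to an oversight level (illustrative thresholds)."""
    score = sum(w for q, w in QUESTIONS.items() if answers.get(q))
    if score >= 8:
        return "Level IV: peer review, public notice, challenge mechanism"
    if score >= 5:
        return "Level III: human-in-the-loop, documented monitoring plan"
    if score >= 2:
        return "Level II: transparency notice, periodic review"
    return "Level I: minimal oversight"

hiring_tool = {
    "decides_access_to_essential_services": True,
    "fully_automated_no_human_review": True,
    "uses_sensitive_personal_data": True,
    "affects_vulnerable_groups": False,
    "decision_is_hard_to_appeal": True,
}
print(impact_level(hiring_tool))  # score 10 -> Level IV
```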
With an understanding of AI impact assessments in place, the discussion can now turn to risk classification in practice, which builds directly on these foundations.
Classifying an AI system into the correct risk tier requires understanding both the technical capability and the deployment context. A facial recognition model is minimal risk when used to sort a personal photo library. The same model is high risk when used for employment screening. The same model is unacceptable risk when used for real-time surveillance in public spaces. The technology is identical; the risk classification depends entirely on how and where it is deployed.
This context-dependence creates practical challenges. Organisations must assess every deployment context, not just every model. A general-purpose model provider cannot predict all downstream uses, which is why the EU AI Act places obligations on both providers (who build the system) and deployers (who use it in a specific context). The provider must supply technical documentation and declare intended uses. The deployer must ensure the system is appropriate for their specific context and that required safeguards are in place.
Risk classification is not a one-time exercise. As the deployment context evolves (new user populations, new geographic regions, new data sources), the risk classification must be reassessed. A system classified as limited risk at launch might become high risk when it is integrated into a critical decision-making workflow.
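A short sketch shows why reassessment matters: the same system's tier changes when its context changes. The context flags and the promotion rule here are simplified assumptions of our own, not text from the Act.

```python
# A sketch of context-driven reclassification. The flags and the rule
# ordering are illustrative assumptions, not the Act's legal tests.
from dataclasses import dataclass

@dataclass
class DeploymentContext:
    use_case: str
    feeds_critical_decision: bool   # e.g. hiring, credit, medical triage
    public_space_realtime_id: bool  # real-time biometric ID in public

def reassess(ctx: DeploymentContext) -> str:
    if ctx.public_space_realtime_id:
        return "unacceptable"
    if ctx.feeds_critical_decision:
        return "high"
    if ctx.use_case == "chatbot":
        return "limited"
    return "minimal"

launch = DeploymentContext("chatbot", feeds_critical_decision=False,
                           public_space_realtime_id=False)
# Later, the same chatbot's outputs start gating loan approvals:
integrated = DeploymentContext("chatbot", feeds_critical_decision=True,
                               public_space_realtime_id=False)

print(reassess(launch))      # limited
print(reassess(integrated))  # high -- reclassification is triggered
```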
Common misconception
“The EU AI Act bans AI in high-risk domains.”
The Act does not ban high-risk AI systems. It imposes requirements: risk management, data governance, transparency, human oversight, accuracy standards, and conformity assessment. Systems that meet these requirements can be deployed. The only systems that are banned are those classified as unacceptable risk (social scoring, manipulative techniques, real-time biometric surveillance for law enforcement with limited exceptions). The Act is designed to enable trustworthy AI deployment, not to prevent it.
A company builds an AI tool that automatically screens job applications and rejects candidates below a threshold score. Under the EU AI Act, what risk tier does this system fall into?
A model card for an image classifier states that the model was trained primarily on images of light-skinned individuals and has not been evaluated on diverse skin tones. What governance function does this serve?
The NIST AI RMF defines four core functions: Govern, Map, Measure, and Manage. A team is identifying which stakeholders are affected by their AI system and cataloguing potential harms. Which function are they performing?
European Parliament, 'Regulation (EU) 2024/1689, Artificial Intelligence Act' (2024)
Title III: High-Risk AI Systems; Article 5: Prohibited Practices
The primary legal text of the EU AI Act. Defines risk tiers, prohibited practices, high-risk categories (Annex III), and compliance obligations for providers and deployers.
NIST, 'AI Risk Management Framework (AI RMF 1.0)' (2023)
Core: Govern, Map, Measure, Manage
The US voluntary framework for AI risk management. Provides practical guidance for organisations implementing AI governance without prescribing specific technical solutions.
Mitchell, M. et al., 'Model Cards for Model Reporting', FAT* Conference (2019)
Full paper
Proposed the model card framework for standardised ML model documentation. Widely adopted by Hugging Face, Google, and the broader ML community as the standard for model transparency.
UK Government, 'A Pro-Innovation Approach to AI Regulation', White Paper (2023)
Chapter 3: Regulatory Framework
Defines the UK's sector-specific approach to AI regulation, the five cross-cutting principles (safety, transparency, fairness, accountability, contestability), and the rationale for avoiding a single comprehensive AI law.
Government of Canada, 'Algorithmic Impact Assessment Tool' (2020)
Assessment methodology
One of the most mature government AI governance tools. Requires federal agencies to assess automated decision systems before deployment and determines oversight levels based on impact scoring.
You now understand the governance and regulatory landscape for AI systems. The Applied capstone that follows integrates everything from Modules 9-15: you will design an AI content moderation pipeline for a social media platform, addressing model selection, deployment strategy, security threats, and regulatory compliance in a single design exercise.