AI practice assessment: Foundations
12 questions

This is a **practice assessment**. It is designed to sharpen judgement and create CPD-friendly reflections. It is not a timed exam.

Scenario: A model is 98% accurate but still causes harm. What is your first suspicion?

Topic: evaluation

Scenario: A spam model relies heavily on the number of links. Why is that risky?

Topic: data

Scenario: You trained and tested on data from the same week. What failure can appear later?

Topic: drift
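To make the drift scenario concrete, here is a minimal sketch of the kind of check that catches it: compare a feature's distribution in the training week against a later week. All numbers and the threshold are invented for illustration; real monitoring would use a proper statistical test.

```python
# A toy drift check (not a formal test): compare a feature's mean in the
# training week against a later week; a large shift flags possible drift.
train_week = [2, 3, 2, 4, 3, 2, 3]   # e.g. links per email during training
later_week = [6, 7, 5, 8, 6, 7, 6]   # the same feature a month later

def mean(xs):
    return sum(xs) / len(xs)

shift = abs(mean(later_week) - mean(train_week))
threshold = 2.0                       # arbitrary alert threshold
print(shift > threshold)              # True → investigate drift
```

A model trained and tested on one week can score well on both splits and still fail once the input distribution moves like this.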

Scenario: A model output is used to automatically reject applications. What is the safer default?

Topic: governance

Scenario: Labels are created by humans under time pressure. What is the predictable risk?

Topic: labels

Scenario: You accidentally trained on features created after the outcome date. What happened?

Topic: evaluation
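The leakage scenario can be sketched as a timestamp guard: drop any feature created after the record's outcome date before training. The record and feature names (e.g. `post_outcome_score`) are hypothetical.

```python
# Hypothetical guard against temporal leakage: keep only features whose
# creation timestamp is on or before the outcome date for that record.
from datetime import date

record = {
    "outcome_date": date(2024, 3, 1),
    "features": {
        "account_age": (date(2024, 2, 1), 730),          # known before outcome
        "post_outcome_score": (date(2024, 3, 15), 0.9),  # created after outcome
    },
}

safe = {
    name: value
    for name, (created, value) in record["features"].items()
    if created <= record["outcome_date"]
}
print(sorted(safe))  # only features known before the outcome survive
```

Training on the unfiltered features would let the model peek at the answer, producing test scores that collapse in production.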

Scenario: Only 1% of cases are positive. Accuracy is 99%. What should you check next?

Topic: metrics
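The arithmetic behind this question is worth seeing once. With 1% positives, a model that never predicts positive scores 99% accuracy while catching nothing; the numbers below are invented to match the scenario.

```python
# Illustration: 1% positive class. An always-negative "model" is 99%
# accurate but has zero recall on the cases that matter.
y_true = [1] * 10 + [0] * 990   # 10 positives out of 1,000
y_pred = [0] * 1000             # predicts negative for everything

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)

print(accuracy)  # 0.99
print(recall)    # 0.0
```

This is why the next check after a suspiciously high accuracy is recall, precision, or a confusion matrix, not more accuracy.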

Scenario: A stakeholder asks for full automation to cut costs. What is the first governance question?

Topic: governance

Scenario: You want to store chat logs to improve the model. What is the most defensible default?

Topic: privacy

Scenario: The model is confident even when wrong. What metric helps you detect this?

Topic: calibration
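A minimal sketch of a reliability check, with invented confidences and outcomes: group predictions by stated confidence and compare each group's average confidence to its accuracy. A positive gap means the model is overconfident in that range.

```python
# Each pair is (stated confidence, whether the prediction was correct).
# These values are fabricated to show an overconfident high-confidence bucket.
preds = [
    (0.95, 0), (0.92, 1), (0.91, 0),   # high confidence, often wrong
    (0.60, 1), (0.55, 0), (0.65, 1),   # moderate confidence, mostly right
]

def bucket_gap(preds, lo, hi):
    """Average confidence minus accuracy for predictions with lo <= conf < hi."""
    group = [(c, y) for c, y in preds if lo <= c < hi]
    if not group:
        return None
    avg_conf = sum(c for c, _ in group) / len(group)
    accuracy = sum(y for _, y in group) / len(group)
    return round(avg_conf - accuracy, 2)  # positive gap = overconfident

print(bucket_gap(preds, 0.9, 1.0))   # large positive gap: overconfident
print(bucket_gap(preds, 0.5, 0.7))   # small gap: reasonably calibrated
```

Averaging these gaps, weighted by bucket size, gives the expected calibration error; reliability diagrams plot the same comparison.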

Scenario: You’re not sure the model is safe. What rollout approach reduces harm fastest?

Topic: deployment

Scenario: Users treat model outputs as truth. What product change reduces over-reliance?

Topic: human-factors
Add CPD reflection (optional)
One short paragraph makes your CPD evidence much stronger.
If any answer surprised you, write a one-paragraph note: what assumption changed, what evidence you would look for, and what you would do differently next time.

Quick feedback

Optional. Your feedback helps improve accuracy and usefulness. No account required.