AI practice assessment: Applied
12 questions

This is a **practice assessment**. It is designed to sharpen judgement and create CPD-friendly reflections; it is not a timed exam.

1. Scenario: Your RAG bot answers confidently but cites the wrong paragraph. What do you fix first? *(topic: rag)*

2. Scenario: A prompt change breaks a workflow. What engineering practice should exist? *(topic: prompts)*
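The prompt-change scenario above points at a standard practice: version your prompts and run a regression suite before shipping any change. A minimal sketch, with a stubbed model call and hypothetical names throughout:

```python
# Minimal prompt regression suite: pin expected behaviours for known inputs
# and run them on every prompt change. The model call is stubbed here.

PROMPT_VERSION = "v2"  # hypothetical: bump on every prompt edit

def call_model(prompt: str, user_input: str) -> str:
    # Stub standing in for a real LLM call.
    if "refund" in user_input.lower():
        return "Please contact support to process a refund."
    return "I can help with that."

REGRESSION_CASES = [
    # (user input, substring that must appear in the answer)
    ("How do I get a refund?", "refund"),
    ("Hello", "help"),
]

def run_regression(prompt: str) -> list[str]:
    """Return a list of failing case descriptions (empty = all passed)."""
    failures = []
    for user_input, expected in REGRESSION_CASES:
        answer = call_model(prompt, user_input)
        if expected not in answer.lower():
            failures.append(f"{user_input!r}: missing {expected!r}")
    return failures

print(run_regression("You are a helpful support assistant."))
```

An empty failure list gates the deploy; any non-empty list blocks the prompt change until the cases are re-examined.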

3. Scenario: The model performs well overall but fails for one user segment. What catches this? *(topic: evaluation)*
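The segment scenario is the classic case for sliced evaluation: an aggregate metric can look healthy while one cohort fails. A toy illustration with made-up data:

```python
from collections import defaultdict

# Hypothetical eval records: (user_segment, prediction_correct)
results = [
    ("desktop", True), ("desktop", True), ("desktop", True), ("desktop", True),
    ("mobile", True), ("mobile", False), ("mobile", False), ("mobile", False),
]

def accuracy_by_segment(records):
    """Aggregate per-segment accuracy from (segment, correct) pairs."""
    totals, correct = defaultdict(int), defaultdict(int)
    for segment, ok in records:
        totals[segment] += 1
        correct[segment] += ok
    return {seg: correct[seg] / totals[seg] for seg in totals}

overall = sum(ok for _, ok in results) / len(results)
print(overall)                       # 5/8 = 0.625: looks tolerable
print(accuracy_by_segment(results))  # mobile is at 0.25: a segment is failing
```

The overall number hides the mobile failure entirely; only the sliced view catches it.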

4. Scenario: You must choose a threshold. What should it be based on? *(topic: thresholds)*
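For the threshold question, one defensible answer is: a cutoff chosen from measured performance on labelled validation data against an explicit trade-off, not picked by feel. A toy sweep (scores and labels are invented):

```python
# Toy threshold sweep: pick the cutoff that maximises F1 on labelled
# validation data. Candidate thresholds are the observed scores themselves.

scores = [0.1, 0.3, 0.45, 0.6, 0.7, 0.9]
labels = [0,   0,   1,    0,   1,   1]  # ground truth

def f1_at(threshold: float) -> float:
    """F1 score if we flag everything scoring at or above the threshold."""
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

best = max(scores, key=f1_at)
print(best, f1_at(best))  # 0.45 wins on this data
```

In practice you would optimise whatever cost function reflects your product (e.g. weight false positives more heavily), but the principle is the same: the threshold comes from data, and gets revisited when the data changes.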

5. Scenario: You add lots of context and answer quality drops. What is the most likely reason? *(topic: rag)*

6. Scenario: Users try to trick the system by changing wording until it misbehaves. What is the right framing? *(topic: security)*

7. Scenario: A user embeds instructions in a document to make your RAG bot ignore policy. What is this? *(topic: security)*

8. Scenario: Retrieval returns the right chunk, but the model still answers wrongly. What do you add? *(topic: rag)*
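The "right chunk, wrong answer" scenario often points at adding a groundedness check between generation and the user. The substring heuristic below is deliberately crude and purely illustrative; production systems typically use an NLI model or an LLM judge:

```python
def is_grounded(answer: str, retrieved_chunks: list[str]) -> bool:
    """Crude check: every sentence of the answer should share at least one
    content word with the retrieved evidence. Real systems use NLI or an
    LLM judge instead of substring overlap."""
    evidence = " ".join(retrieved_chunks).lower()
    for sentence in answer.split("."):
        # Only consider words longer than four characters as "content" words.
        words = [w for w in sentence.lower().split() if len(w) > 4]
        if words and not any(w in evidence for w in words):
            return False  # this sentence has no support in the evidence
    return True

chunks = ["Refunds are processed within 14 days of the request."]
print(is_grounded("Refunds are processed within 14 days.", chunks))  # True
print(is_grounded("Payments arrive instantly.", chunks))             # False
```

Ungrounded answers can then be retried, softened ("I'm not sure"), or routed to a human instead of being shown as-is.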

9. Which evaluation approach is most defensible for a user-facing assistant? *(topic: evaluation)*

10. Scenario: Users report 'it was fine yesterday'. What do you check first? *(topic: monitoring)*
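One way to make 'it was fine yesterday' answerable is to stamp every response with the exact stack that produced it, so the report becomes a diff rather than a mystery. A sketch with hypothetical version identifiers:

```python
import datetime
import json

# Hypothetical deployment metadata: everything that could have changed
# between yesterday and today.
DEPLOYMENT = {
    "model": "gpt-x-2024-06",
    "prompt_version": "v7",
    "index_build": "2024-06-01",
}

def log_interaction(question: str, answer: str) -> str:
    """Return one JSON log line tying an answer to the stack that produced it."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        **DEPLOYMENT,
    }
    return json.dumps(record)

print(log_interaction("Where is my order?", "It ships tomorrow."))
```

With logs like this, the first check is a diff of yesterday's versus today's `model`, `prompt_version`, and `index_build` fields, which usually locates the regression before any deeper debugging.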

11. Scenario: A bot can call an internal 'refund' tool. What is the safest default policy? *(topic: security)*
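For the refund-tool scenario, a common safe default is deny-by-default: an explicit allowlist, with human confirmation required for irreversible actions. A sketch with hypothetical tool names:

```python
# Deny-by-default tool gate: the model may only call allowlisted tools,
# and sensitive ones require a human in the loop.

ALLOWED_TOOLS = {
    "lookup_order": {"confirm": False},
    "refund": {"confirm": True},  # irreversible: needs human confirmation
}

def authorise(tool: str, human_confirmed: bool = False) -> bool:
    """Decide whether a tool call may proceed."""
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        return False  # unknown tool: deny by default
    if policy["confirm"] and not human_confirmed:
        return False  # sensitive tool without confirmation: deny
    return True

print(authorise("lookup_order"))                  # True
print(authorise("refund"))                        # False until confirmed
print(authorise("refund", human_confirmed=True))  # True
print(authorise("delete_database"))               # False: not allowlisted
```

The key property is that a new or hallucinated tool name fails closed; nothing is callable until someone deliberately adds it to the allowlist.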

12. Scenario: Your RAG system retrieves contradictory policies. What should the assistant do? *(topic: rag)*
Add CPD reflection (optional)
One short paragraph makes your CPD evidence much stronger.
If any answer surprised you, write a one-paragraph note: what assumption changed, what evidence you would look for, and what you would do differently next time.
