AI Foundations stage test

No governed timed route exists for this stage yet, so this page gives you an honest untimed stage-end check built from the published bank.

Format: Untimed self-check
Questions: 12
Best time to use it: After the stage modules and practice

Question 1

Scenario: A model is 98% accurate but still causes harm. What is your first suspicion?

  1. The model is overfitting because it trained too long
  2. Errors are concentrated in a minority group or high-impact cases
  3. The model needs more parameters
  4. The UI is probably the only problem

Correct answer: Errors are concentrated in a minority group or high-impact cases
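
A quick way to see this in practice is to break the error rate out by subgroup rather than reporting one overall number. The sketch below uses invented counts: the model is 98% accurate overall, yet every error lands in one small group.

```python
# Hypothetical per-group error check: overall accuracy can hide
# errors concentrated in one subgroup. All counts are invented.
from collections import defaultdict

def error_rate_by_group(y_true, y_pred, groups):
    """Return {group: error_rate} so concentrated errors become visible."""
    errors, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        if t != p:
            errors[g] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented example: 98/100 correct overall, but both errors hit group "B".
y_true = [1] * 100
y_pred = [1] * 98 + [0, 0]
groups = ["A"] * 90 + ["B"] * 10
rates = error_rate_by_group(y_true, y_pred, groups)
# rates: {"A": 0.0, "B": 0.2} - a 20% error rate hidden inside 98% accuracy
```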

Question 2

Scenario: A spam model relies heavily on number of links. Why is that risky?

  1. Links are always a sign of spam
  2. The model may learn a shortcut that correlates with spam in the training data but is not causal
  3. Link counting is too expensive to compute
  4. It violates encryption

Correct answer: The model may learn a shortcut that correlates with spam in the training data but is not causal

Question 3

Scenario: You trained and tested on data from the same week. What failure can appear later?

  1. Drift as real inputs change
  2. The GPU will overheat
  3. The model becomes deterministic
  4. The labels become encrypted

Correct answer: Drift as real inputs change
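
One simple drift signal is to compare a feature's distribution in live traffic against what the model was trained on. The sketch below is a naive check with invented data and an assumed alert threshold, not a production monitor.

```python
# Naive drift check (invented data, assumed threshold): flag a feature
# whose live mean has moved far from its training mean.
from statistics import mean, stdev

def mean_shift(train_vals, live_vals):
    """Shift of the live mean, measured in training standard deviations."""
    s = stdev(train_vals)
    return abs(mean(live_vals) - mean(train_vals)) / s if s else float("inf")

train = [10, 11, 9, 10, 12, 10, 9, 11]   # invented training-week values
live = [15, 16, 14, 15, 17, 15]          # invented later live values
shift = mean_shift(train, live)
drifted = shift > 2.0  # assumed alert threshold: two standard deviations
```

Real monitoring would track many features over time (e.g. with population stability index or KS tests), but even this crude check catches the "trained and tested on the same week" failure mode.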

Question 4

Scenario: A model output is used to automatically reject applications. What is the safer default?

  1. Full automation with no appeal path
  2. Human review for high-impact cases with accountability and monitoring
  3. Raise the temperature for more creativity
  4. Only collect more data and ignore governance

Correct answer: Human review for high-impact cases with accountability and monitoring

Question 5

Scenario: Labels are created by humans under time pressure. What is the predictable risk?

  1. Label noise and bias
  2. Perfect ground truth
  3. Fewer features
  4. Lower compute cost

Correct answer: Label noise and bias
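
A cheap way to detect this risk is to have two annotators label the same sample and measure how often they agree. The sketch below uses invented labels; low raw agreement is a direct signal of label noise.

```python
# Quick inter-annotator agreement check (labels invented). Rushed
# labelling tends to show up as disagreement on the same items.
def agreement_rate(labels_a, labels_b):
    """Fraction of items where two annotators gave the same label."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

annotator_1 = ["spam", "ham", "spam", "ham", "spam"]
annotator_2 = ["spam", "spam", "spam", "ham", "ham"]
rate = agreement_rate(annotator_1, annotator_2)  # 0.6
```

Production pipelines typically use chance-corrected measures such as Cohen's kappa, but raw agreement is enough to flag a noisy labelling process.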

Question 6

Scenario: You accidentally trained on features created after the outcome date. What happened?

  1. Label leakage that makes tests look unrealistically good
  2. Better generalisation
  3. Lower variance automatically
  4. Safer deployment by default

Correct answer: Label leakage that makes tests look unrealistically good
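
The usual guard against this is a timestamp filter: drop any feature created after the outcome was recorded, before training. The field names and dates below are invented for illustration.

```python
# Sketch of a leakage guard (field names invented): keep only features
# that were known strictly before the outcome date.
from datetime import date

def safe_features(features, outcome_date):
    """Filter out features created on or after the outcome date."""
    return {name: f["value"] for name, f in features.items()
            if f["created"] < outcome_date}

features = {
    "account_age": {"value": 3.2, "created": date(2024, 1, 5)},
    "chargeback_flag": {"value": 1, "created": date(2024, 3, 9)},  # post-outcome: leaks the label
}
usable = safe_features(features, outcome_date=date(2024, 2, 1))
# usable keeps only "account_age"; the post-outcome feature is dropped
```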

Question 7

Scenario: Only 1% of cases are positive. Accuracy is 99%. What should you check next?

  1. Precision/recall and threshold trade-offs
  2. Only model size
  3. Only GPU type
  4. Only prompt wording

Correct answer: Precision/recall and threshold trade-offs
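
A worked example with invented counts shows why accuracy is misleading here: with 1% positives, a model that predicts "negative" for everything scores 99% accuracy while catching nothing.

```python
# Invented counts: 1,000 cases, 10 positives. The "always negative"
# model gets 99% accuracy but zero recall.
def precision_recall(tp, fp, fn):
    """Precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

tp, fp, fn, tn = 0, 0, 10, 990               # predicts negative every time
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 0.99
precision, recall = precision_recall(tp, fp, fn)  # (0.0, 0.0)
```

Moving the decision threshold trades precision against recall, which is why both must be inspected together on imbalanced data.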

Question 8

Scenario: A stakeholder asks for full automation to cut costs. What is the first governance question?

  1. What is the worst credible harm and who is accountable for it?
  2. Which cloud vendor is used?
  3. How many parameters does the model have?
  4. Can we remove monitoring to save time?

Correct answer: What is the worst credible harm and who is accountable for it?

Question 9

Scenario: You want to store chat logs to improve the model. What is the most defensible default?

  1. Collect the minimum needed with clear purpose, retention, and access controls
  2. Collect everything forever because it might be useful
  3. Collect nothing and keep no operational evidence
  4. Email transcripts to the whole team for faster iteration

Correct answer: Collect the minimum needed with clear purpose, retention, and access controls

Question 10

Scenario: The model is confident even when wrong. What metric helps you detect this?

  1. Calibration (reliability) analysis
  2. Only throughput
  3. Only token count
  4. Only model size

Correct answer: Calibration (reliability) analysis
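
The core of a reliability analysis is to bin predictions by stated confidence and compare that confidence to the actual hit rate in each bin. This minimal sketch uses invented data where the model claims 90% confidence but is right only half the time.

```python
# Minimal reliability check (bin count and data invented): group
# predictions by confidence and compare average confidence to accuracy.
def reliability_bins(confidences, correct, n_bins=2):
    """Return (avg_confidence, hit_rate) per non-empty confidence bin."""
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        i = min(int(c * n_bins), n_bins - 1)
        bins[i].append((c, ok))
    out = []
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            hit_rate = sum(ok for _, ok in b) / len(b)
            out.append((avg_conf, hit_rate))
    return out

# Invented: four predictions at 0.9 confidence, only two correct.
conf = [0.9, 0.9, 0.9, 0.9]
correct = [1, 0, 1, 0]
bins = reliability_bins(conf, correct)
# One bin with ~0.9 average confidence but a 0.5 hit rate: overconfident.
```

A well-calibrated model would show hit rates close to the stated confidence in every bin; large gaps like this are the "confident even when wrong" signature.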

Question 11

Scenario: You are not sure the model is safe. What rollout approach reduces harm fastest?

  1. A staged rollout with monitoring, guardrails, and a rollback plan
  2. Big-bang launch to learn faster
  3. Turn off logging to reduce privacy risk
  4. Disable the appeal path to reduce support load

Correct answer: A staged rollout with monitoring, guardrails, and a rollback plan

Question 12

Scenario: Users treat model outputs as truth. What product change reduces over-reliance?

  1. Show uncertainty limits, require confirmation for high-impact actions, and provide sources/alternatives
  2. Hide explanations to keep the UI clean
  3. Increase temperature for confidence
  4. Remove all warnings to improve adoption

Correct answer: Show uncertainty limits, require confirmation for high-impact actions, and provide sources/alternatives