Digital Strategy and Enterprise Scale · Module 4
Measurement, risk, and roadmaps
Digitalisation is never complete, so you need a way to steer.
Previously
Platforms, ecosystems, and governance
A platform only works when governance is clear.
This module
Measurement, risk, and roadmaps
Digitalisation is never complete, so you need a way to steer.
Next
Digitalisation Advanced practice test
Test recall and judgement against the governed stage question bank before you move on.
Progress
Mark this module complete when you can explain it without rereading every paragraph.
Why this matters
A programme reports “number of dashboards built” and “number of APIs published”. The numbers rise, yet nothing users experience improves. Without outcome measures, programmes get rewarded for output rather than results, and this module is about avoiding that trap.
What you will be able to do
- 1 Explain measurement, risk, and roadmaps in your own words and apply them to a realistic scenario.
- 2 Explain why risk and roadmaps become manageable only when measurement feeds decisions.
- 3 Check the assumption "Measures are meaningful" and explain what changes if it is false.
- 4 Check the assumption "Risk is revisited" and explain what changes if it is false.
Before you begin
- Comfort with earlier modules in this track
- Ability to explain trade-offs and risks without jargon
Common ways people get this wrong
- Big bang plans. Plans without feedback fail late and expensively.
- Paper risk management. Risk registers that do not change controls are not management.
Main idea at a glance
Roadmap view
Measure, learn, and adjust.
Phase 1
Stabilise data quality and establish ownership. Fix the foundations before scaling. Clean up data pipelines, assign stewards, define quality metrics, and create the governance structures that will support phases 2 and 3.
I think Phase 1 is the least exciting and the most important phase. It is where you build the operational muscle that makes scaling possible. Skip it, and Phase 2 becomes an exercise in scaling your problems.
This is a cycle, not a waterfall. Evidence from each phase reshapes the next iteration.
Digitalisation is never complete, so you need a way to steer. A risk appetite guides how fast you move. A roadmap keeps teams aligned.
If you cannot measure adoption, quality, and stability together, you will eventually drift.
Worked example. Vanity metrics that funded the wrong work
A programme reports “number of dashboards built” and “number of APIs published”. Those numbers go up, and leadership feels good. Meanwhile, journey completion rates and data quality remain flat. The programme then optimises for output, not outcome, because that is what it is rewarded for.
Common mistakes in measurement and roadmapping
Roadmapping anti-patterns
Avoid these traps to keep planning evidence-led.
- Choosing easy metrics over meaningful metrics. Prioritise user and reliability outcomes, not activity counts.
- Keeping fixed roadmaps despite new evidence. Roadmaps must adapt when operating reality changes.
- Maintaining non-actionable risk registers. Tie each risk to controls and decisions, not static lists.
- Treating risk appetite as a slogan. Use risk appetite as an explicit decision rule for trade-offs.
Verification. Evidence-led roadmap review
Evidence-led roadmap review
Run this review at each planning checkpoint.
- Outcome impact per roadmap item. State exactly which outcome changes and how it is measured.
- Top risks and controls. Prioritise the top three risks and explicit mitigations.
- Capacity creation by stopping work. Identify what is paused or removed to fund the new phase.
- Pause and rollback criteria. Define clear thresholds that trigger pause or reversal.
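Pause and rollback criteria are easiest to honour when they are written down as explicit, testable rules rather than prose. A minimal Python sketch of that idea; all metric names and threshold values here are hypothetical examples, not figures from this module:

```python
# Illustrative sketch: pause and rollback criteria as explicit thresholds.
# The metric names and numbers are hypothetical placeholders.

def review_phase(error_rate, adoption_rate, incident_count,
                 max_error_rate=0.05, min_adoption_rate=0.30,
                 max_incidents=3):
    """Return 'continue', 'pause', or 'rollback' for a roadmap phase."""
    if error_rate > 2 * max_error_rate or incident_count > max_incidents:
        return "rollback"   # reliability or risk threshold badly breached
    if error_rate > max_error_rate or adoption_rate < min_adoption_rate:
        return "pause"      # stop and investigate before investing more
    return "continue"

print(review_phase(error_rate=0.02, adoption_rate=0.45, incident_count=1))
```

The point is not the specific numbers but that the trigger is agreed before the phase starts, so the decision is mechanical when the evidence arrives.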
Systems thinking. Feedback loops and unintended behaviour
Digitalisation connects parts of a system that used to be loosely coupled. That creates feedback loops. Feedback loops can stabilise a system or destabilise it. This is why measurement is not a reporting task. Measurement is part of control.
A simple feedback loop model
Measure, decide, act, then measure again
Stage 1. Measure
Collect signals from the system. Response times, error rates, adoption metrics, cost per transaction. The measurement must be timely enough to inform the next decision and honest enough to show bad news.
Impatience kills feedback loops. If you act again before the first action has time to show its effect, you create oscillation.
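That settling behaviour can be sketched in a few lines. A hypothetical Python loop that only acts after the previous action has had time to show its effect; the names, step sizes, and settling period are all illustrative:

```python
# Sketch of a measure-decide-act loop with a settling period.
# Acting on every measurement, before the last action takes effect,
# is what produces oscillation.

def run_loop(signal_stream, target, settle_cycles=3):
    """Nudge a setting toward a target, waiting `settle_cycles`
    measurements between adjustments."""
    setting = 0.0
    cycles_since_action = settle_cycles   # allowed to act immediately
    actions = []
    for measured in signal_stream:
        cycles_since_action += 1
        if cycles_since_action < settle_cycles:
            continue                      # let the last action play out
        if measured < target:
            setting += 0.1                # small, reversible adjustment
            actions.append(("increase", round(setting, 1)))
            cycles_since_action = 0
        elif measured > target:
            setting -= 0.1
            actions.append(("decrease", round(setting, 1)))
            cycles_since_action = 0
    return actions

actions = run_loop([0.5, 0.5, 0.5, 0.9], target=0.8, settle_cycles=2)
```

In this run the loop acts on the first and third measurements and deliberately skips the second and fourth, even though both are off target, because the previous action has not had time to show its effect.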
Worked example. A good metric that created worse behaviour
A team is measured on “tickets closed”. They close tickets faster by closing them early and re-opening later, or by pushing work to another queue. The metric improved, the service got worse, and trust collapsed.
My opinion is that if a metric can be gamed, it will be gamed. Not because people are evil, but because people respond to incentives under pressure. The fix is to measure outcomes and the cost of failure, not activity.
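One way to make that fix concrete is to report the activity metric and its failure cost side by side, so gaming one shows up in the other. A sketch with illustrative field names:

```python
# Sketch: pair an activity metric ("tickets closed") with the failure
# cost it can hide (the reopen rate). Field names are hypothetical.

def ticket_metrics(tickets):
    """Report closures alongside the reopen rate that closures can hide."""
    closed = [t for t in tickets if t["closed"]]
    reopened = [t for t in closed if t["reopened"]]
    return {
        "tickets_closed": len(closed),
        "reopen_rate": len(reopened) / len(closed) if closed else 0.0,
    }

tickets = [
    {"closed": True, "reopened": False},
    {"closed": True, "reopened": True},   # closed early, came back
    {"closed": True, "reopened": True},
    {"closed": False, "reopened": False},
]
metrics = ticket_metrics(tickets)  # closures look fine; the reopen rate tells the story
```

Closing tickets early now inflates the first number and the second number at the same time, so the gaming strategy is visible in the same report.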
Verification. A measurement pack that earns trust
Measurement pack for trust
Track all five dimensions together to avoid blind spots.
- Outcome metric. Measure what users actually experience.
- Reliability metric. Track errors, latency percentiles, and rework rate.
- Risk metric. Monitor incidents, audit findings, and privacy events.
- Adoption metric. Track active use and drop-off segments.
- Review cadence. Define who decides changes and how frequently.
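The five dimensions above can be treated as a completeness rule rather than a slide layout: a measurement pack is only trusted when every dimension is reported together. A small Python sketch, with hypothetical metric names:

```python
# Sketch: a measurement pack as data, with a completeness check that
# refuses partial packs. All metric names are illustrative placeholders.

REQUIRED_DIMENSIONS = {"outcome", "reliability", "risk", "adoption", "cadence"}

def pack_is_complete(pack):
    """A pack earns trust only when all five dimensions are present."""
    return REQUIRED_DIMENSIONS <= set(pack)

pack = {
    "outcome": {"journey_completion_rate": 0.62},
    "reliability": {"p95_latency_ms": 420, "rework_rate": 0.08},
    "risk": {"open_audit_findings": 2},
    "adoption": {"weekly_active_users": 1800},
    "cadence": {"review": "monthly", "owner": "service lead"},
}
complete = pack_is_complete(pack)  # no blind spots in this pack
```

Dropping any one key, say the risk metrics, makes the check fail, which is exactly the blind spot the module warns about.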
CPD evidence you can defend
CPD evidence checklist
Capture these outputs to demonstrate advanced application.
- What I studied. Target state prioritisation, ecosystem stewardship, standards, and evidence-led roadmapping.
- What I produced. A target-state canvas, ecosystem map, and phased roadmap with risks and metrics.
- What changed in my practice. State one durable rule, for example requiring named owners and measurable outcomes.
- Evidence artefact. Provide a one-page summary of outcome metrics, next phase, and control plan.
Mental model
Measure and steer
Risk and roadmaps become manageable when measurement feeds decisions.
- 1 Measure
- 2 Risk
- 3 Roadmap
- 4 Deliver
Assumptions to keep in mind
- Measures are meaningful. If measures are weak, roadmaps become opinions.
- Risk is revisited. Risk changes with the system. Review on a cadence.
Failure modes to notice
- Big bang plans. Plans without feedback fail late and expensively.
- Paper risk management. Risk registers that do not change controls are not management.
Key terms
- risk appetite
- The level of risk an organisation is willing to accept.
- roadmap
- A staged plan that sequences change over time.
Check yourself
Quick check. Measurement and roadmaps
Why define risk appetite?
It guides how fast you move and which controls you need.
What should a roadmap include?
Phases, outcomes, and clear measures of progress, not only a list of projects.
Why measure adoption and quality together?
So you do not trade speed for hidden failure and user harm.
What happens when you cannot measure outcomes?
You drift and lose trust because you cannot show what improved.
Why revisit a roadmap often?
Evidence changes and the plan must adjust.
Name one common measurement mistake.
Tracking output or speed without stability, quality, and outcome.
Artefact and reflection
Artefact
A concise design or governance brief that can be reviewed by a team
Reflection
Where in your work would explaining measurement, risk, and roadmaps in your own words, and applying them to a realistic scenario, change a decision, and what evidence would make you trust that change?
Optional practice
Pick KPIs and set risk appetite to shape the next phase.