CPD timing for this level
Advanced time breakdown
This is the first pass of a defensible timing model for this level, based on what is actually on the page: reading, labs, checkpoints, and reflection.
What changes at this level
Level expectations
I want each level to feel independent, but also clearly deeper than the last. This panel makes the jump explicit so the value is obvious.
Governance, measurement, and defensible roadmaps.
Not endorsed by a certification body. This is my marking standard for consistency and CPD evidence.
- A target state canvas with capability choices, owners, and what you will stop doing.
- A phased roadmap with dependencies plus a risk register and control plan.
- A governance and operating model note: decision rights, funding model assumptions, and how you keep change sustainable.
Digitalisation Advanced
CPD tracking
Fixed hours for this level: 12. The timed assessment is counted once, when you pass.
Advanced is about strategy, governance, and sustainable operating models. That maps well to:
- TOGAF (enterprise architecture orientation): target state, capability design, and governance.
- ITIL 4: service stewardship, measurement, and continual improvement.
- APMG-style change and transformation thinking: sequencing change, managing risk, and building adoption that survives reality.
Advanced digitalisation is about scale, shared infrastructure, and long term stewardship. You are designing not just a programme, but the system the programme lives in.
Strategy and target state architecture
The hardest part is focus. You cannot improve every capability at once, so you choose the few that unlock the rest.
Strategy is a sequence of hard choices, not a list of everything you want.
Target state view
Move from today to a focused future design.
Current state
- Fragmented data models
- Platform gaps and duplicated services
- Mixed ownership and unclear accountability
Target state
- Shared data platform and stable APIs
- Clear capability ownership
- Measured outcomes and risk controls
Focus on the capabilities that unblock outcomes first, not the ones that are most fashionable.
Quick check: strategy and target state
Why is a target state useful?
What does a reference architecture provide?
Scenario: Everyone loves the target state diagram, but delivery starts as twenty unrelated projects. What was missing?
What should guide capability choices?
Why is ownership part of the target state?
What keeps a target state credible?
🧪Worked example. The target state that became a poster, not a plan
A team produces a beautiful target state diagram. Everyone nods. Delivery then starts as twenty unrelated projects because nothing was prioritised and nothing had owners.
A target state earns its keep when it does three things: it forces choices, it sequences the work, and it makes accountability explicit. My opinion: if the target state cannot answer “what do we do next quarter?”, it is a picture, not architecture.
⚠️Common mistakes in target state work
- Listing everything you want, rather than choosing what unlocks outcomes.
- Mixing “capability” with “project” and “tool”. Keep them distinct.
- No operational view: how it will be run, monitored, and supported.
- No migration story: how you move from today to tomorrow safely.
🔎Verification. A target state quality bar
- Clear top 3 outcomes and the measures that prove them.
- Clear capability owners (not just delivery leads).
- A phased roadmap with dependencies and a risk register.
- A “stop doing” list (otherwise you are just adding work).
📝Reflection prompt
If you could only fix one capability that would reduce chaos for everyone, what would it be: identity, data quality, integration, monitoring, or decision rights? Why that one?
Data sharing, models and standards
Interoperability is not just a technical issue. It is legal, operational, and cultural. You need incentives for everyone to keep the model accurate over time.
Shared data model
Many systems, one shared meaning.
Shared IDs
Stable identifiers prevent mismatch errors.
Version control
Changes stay visible and reversible.
Data ownership
Every field has a steward and a rule.
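To make “every field has a steward and a rule” concrete, here is a minimal sketch of a shared field registry in Python. The field names, versions, and steward roles are invented for illustration, not taken from any standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SharedField:
    """One field in the canonical model: a stable ID, an agreed meaning, an owner, a rule."""
    field_id: str       # stable identifier, never reused even if the field is retired
    meaning: str        # the shared definition every system builds against
    steward: str        # named owner accountable for changes to this field
    rule: str           # the validation rule consumers can rely on
    model_version: str  # version of the canonical model that last changed it

# Illustrative entries only; a real registry would be governed and version controlled.
canonical_model = [
    SharedField("customer.id", "Stable customer identifier", "Customer Data Steward",
                "UUID, never reused", "2.3"),
    SharedField("case.opened_at", "When the case was opened", "Case Management Owner",
                "ISO 8601 timestamp, UTC", "2.1"),
]

# A quick governance check: every shared field must name a steward.
unowned = [f.field_id for f in canonical_model if not f.steward.strip()]
```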
Quick check: data sharing and standards
Why use a canonical model?
What makes interoperability hard?
Why are data sharing agreements important?
What is the risk of unmanaged model changes?
Why do shared IDs matter?
Who should own a shared field?
🧪Worked example. Shared data without shared incentives
A common failure mode in ecosystems is “we agreed the model”, then nobody funds stewardship. Publishers ship changes when it suits them, consumers build workarounds, and the canonical model becomes fiction. Interoperability dies slowly, then all at once.
⚠️Common mistakes in standards at scale
- Assuming technical agreement is enough. Incentives and stewardship matter more.
- No version governance. Changes land in production without coordinated release.
- Treating data sharing agreements as paperwork, not operational reality.
- Underestimating identity and authorisation as the backbone of trust.
🔎Verification. “Can we share data safely?” checklist
- Clear purpose and lawful basis for sharing (and a retention story).
- Named stewards and a change control path for the shared model.
- Access controls and audit trails (who accessed what, when, why).
- Monitoring for misuse and unexpected volume or patterns.
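As a rough sketch of the “who accessed what, when, why” point in this checklist, an audit trail entry can be as small as the record below. The fields and the purpose check are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    """One audit-trail entry: who accessed what, when, and for which declared purpose."""
    actor: str         # authenticated user or service identity
    resource: str      # dataset or record identifier
    purpose: str       # purpose declared against the data sharing agreement
    occurred_at: datetime

def record_access(audit_log: list, actor: str, resource: str, purpose: str) -> AccessEvent:
    """Refuse access with no declared purpose; otherwise append an audit entry."""
    if not purpose.strip():
        raise ValueError("access must cite a purpose from the sharing agreement")
    event = AccessEvent(actor, resource, purpose, datetime.now(timezone.utc))
    audit_log.append(event)
    return event
```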
📝Reflection prompt
Which is harder in your world: agreeing a standard, or keeping it healthy for five years? What would you put in place to make it survive leadership changes?
Platforms, ecosystems and governance
This is where stewardship matters. If one actor abuses the system, everyone pays the cost.
Ecosystem roles
Balance publishers, consumers, and governors.
Publishers
Provide authoritative data and updates.
Consumers
Use data to build services and insights.
Governors
Set rules and protect trust.
Healthy ecosystems make value easy to access and misuse hard to hide.
Quick check: platforms and ecosystems
What makes a platform different from a project?
Why does governance matter for platforms?
Who are publishers in an ecosystem?
What happens when roles are unbalanced?
Why is stewardship important?
What is a common platform risk?
🧪Worked example. The ecosystem that collapsed under “free riders”
A platform succeeds, consumers build on it quickly, and usage grows. But only one party pays for reliability and support. Over time, incidents rise, trust drops, and everyone quietly rebuilds their own version. That is not a technical failure. That is a governance and funding failure.
⚠️Common mistakes in ecosystems
- No clear rules for participation, quality, and consequences for misuse.
- Funding models that reward consumption but not stewardship.
- Measuring adoption without measuring trust, quality, and incident burden.
🔎Verification. Is the platform actually governable?
- Who can change contracts and schemas, and how are changes announced?
- What happens when a consumer breaks the rules (rate limits, access revocation, remediation)?
- How do you measure trust: data quality, incidents, complaints, and audit findings?
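To show how “consequences for misuse” might become operational rather than rhetorical, here is a hedged sketch of a consumer admission check: revoked consumers are refused, and noisy consumers are throttled. The consumer IDs, threshold, and revocation list are invented for the example.

```python
from collections import defaultdict

revoked_consumers = {"consumer-042"}     # access withdrawn after repeated rule breaches
rate_limit_per_minute = 600              # invented threshold for the sketch
requests_this_minute = defaultdict(int)  # reset by the platform each minute

def admit_request(consumer_id: str) -> bool:
    """Admit a request only if the consumer is in good standing and under its rate limit."""
    if consumer_id in revoked_consumers:
        return False                     # the consequence for breaking the rules
    requests_this_minute[consumer_id] += 1
    return requests_this_minute[consumer_id] <= rate_limit_per_minute
```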
📝Reflection prompt
If you were forced to pick one: would you optimise for speed of onboarding new consumers, or for safety and auditability? What evidence would you use to justify your choice?
Measurement, risk and roadmaps
If you cannot measure adoption, quality, and stability together, you will eventually drift.
Roadmap view
Measure, learn, and adjust.
Phase 1
Stabilise data quality and ownership.
Phase 2
Scale platforms and automate reporting.
Phase 3
Expand ecosystem services and analytics.
Roadmaps should change when evidence changes. Keep them honest with data.
Quick check: measurement and roadmaps
Why define risk appetite?
What should a roadmap include?
Why measure adoption and quality together?
What happens when you cannot measure outcomes?
Why revisit a roadmap often?
What is a common mistake in measurement?
🧪Worked example. Vanity metrics that funded the wrong work
A programme reports “number of dashboards built” and “number of APIs published”. Those numbers go up, and leadership feels good. Meanwhile, journey completion rates and data quality remain flat. The programme then optimises for output, not outcome, because that is what it is rewarded for.
⚠️Common mistakes in measurement and roadmapping
- Metrics that are easy to count but not meaningful to users.
- Roadmaps that never change even when evidence changes.
- Risk registers that list everything, then influence nothing.
- Treating risk appetite as a slogan rather than a decision rule.
🔎Verification. Evidence-led roadmap review
- For each roadmap item: what outcome does it change, and how will you measure it?
- What are the top 3 risks, and what controls reduce them?
- What do you stop doing to create capacity?
- What would make you pause or roll back?
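One way to make this review repeatable is to treat it as a gate, in the spirit of the decision rule quoted later in the CPD evidence note (“If we cannot name an owner and a measurable outcome, it does not go on the roadmap”). The field names in this sketch are assumptions, not a template.

```python
from dataclasses import dataclass, field

@dataclass
class RoadmapItem:
    name: str
    owner: str = ""           # accountable capability owner, not just a delivery lead
    outcome_metric: str = ""  # the measure this item is expected to move
    top_risks: list = field(default_factory=list)   # risks and the controls that reduce them
    stop_doing: list = field(default_factory=list)  # what is stopped to create capacity

def belongs_on_roadmap(item: RoadmapItem) -> bool:
    """Decision rule: no named owner or no measurable outcome means it does not go on."""
    return bool(item.owner.strip()) and bool(item.outcome_metric.strip())
```

The value is not the code; it is that the rule is explicit enough to be applied the same way at every review.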
🧠Systems thinking. Feedback loops and unintended behaviour
Digitalisation connects parts of a system that used to be loosely coupled. That creates feedback loops. Feedback loops can stabilise a system or destabilise it. This is why measurement is not a reporting task. Measurement is part of control.
A simple feedback loop model
Measure, decide, act, then measure again
Measure
Telemetry, KPIs, leading indicators.
Decide
Rule, model, or policy choice.
Act
Automation, process change, incentives.
Delay
Real systems respond later than dashboards.
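A toy simulation shows why the delay box matters: a controller that reacts to a lagged measurement keeps correcting against stale information and can destabilise instead of settling. The target, gain, and delay values are made up purely to show the shape of the behaviour.

```python
def simulate(delay_steps: int, gain: float = 0.8, steps: int = 12) -> list:
    """Toy loop: measure a lagged value, decide a proportional correction, act, repeat."""
    target, actual = 100.0, 60.0
    history = [actual] * (delay_steps + 1)   # what the dashboard shows, delayed
    trace = []
    for _ in range(steps):
        measured = history[-(delay_steps + 1)]  # measure: stale by delay_steps
        actual += gain * (target - measured)    # decide and act on old information
        history.append(actual)
        trace.append(round(actual, 1))
    return trace

print(simulate(delay_steps=0))  # settles smoothly towards the target
print(simulate(delay_steps=3))  # over-corrects and destabilises instead of settling
```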
🧪Worked example. A good metric that created worse behaviour
A team is measured on “tickets closed”. They close tickets faster by closing them early and reopening them later, or by pushing work to another queue. The metric improves, the service gets worse, and trust collapses.
My opinion: if a metric can be gamed, it will be gamed. Not because people are evil, but because people respond to incentives under pressure. The fix is to measure outcomes and the cost of failure, not activity.
🔎Verification. A measurement pack that earns trust
- Outcome metric: what the user experiences.
- Reliability metric: errors, latency percentiles, rework rate.
- Risk metric: incidents, audit findings, privacy events.
- Adoption metric: who uses it, and who dropped off.
- Review cadence: who decides changes, and how often.
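The pack above is easy to hold as plain data, which makes gaps visible (an empty category is itself a finding). The metric names below are placeholders that echo earlier examples, not recommendations.

```python
# A measurement pack as plain data: one list per category, plus a review cadence.
measurement_pack = {
    "outcome":     ["journey completion rate"],          # what the user experiences
    "reliability": ["error rate", "p95 latency", "rework rate"],
    "risk":        ["incidents", "audit findings", "privacy events"],
    "adoption":    ["active users by team", "drop-off after first month"],
}
review = {"decided_by": "service owner", "cadence": "monthly"}  # who decides changes, how often

# A category with no metrics is a gap worth raising at the next review.
gaps = [category for category, metrics in measurement_pack.items() if not metrics]
```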
🧾CPD evidence (advanced, still honest)
- What I studied: target state and capability focus, ecosystem stewardship, standards and agreements, and evidence-led roadmaps.
- What I produced: a target state canvas, an ecosystem map, and a phased roadmap with risks and measures.
- What changed in my practice: one decision rule. Example: “If we cannot name an owner and a measurable outcome, it does not go on the roadmap.”
- Evidence artefact: one page showing (1) outcome metrics, (2) the next phase, and (3) the risk controls.
