CPD timing for this level

Advanced time breakdown

This is the first pass of a defensible timing model for this level, based on what is actually on the page: reading, labs, checkpoints, and reflection.

  • Reading: 19m (2,678 words · base 14m × 1.3)
  • Labs: 60m (4 activities × 15m)
  • Checkpoints: 20m (4 blocks × 5m)
  • Reflection: 32m (4 modules × 8m)

Estimated guided time: 2h 11m, based on page content and disclosed assumptions.

Claimed level hours: 12h. The claim includes reattempts, deeper practice, and capstone work.
The claimed hours are higher than the current on-page estimate by about 10h. That gap is where I will add more guided practice and assessment-grade work so the hours are earned, not declared.

What changes at this level

Level expectations

I want each level to feel independent, but also clearly deeper than the last. This panel makes the jump explicit so the value is obvious.

Anchor standards (course wide)
  • ITIL 4 (service value system)
  • COBIT 2019 (governance of enterprise IT)
Assessment intent: Strategy (governance, measurement, and defensible roadmaps).
Assessment style: mixed format.
Pass standard: coming next.

Not endorsed by a certification body. This is my marking standard for consistency and CPD evidence.

Evidence you can save (CPD friendly)
  • A target state canvas with capability choices, owners, and what you will stop doing.
  • A phased roadmap with dependencies plus a risk register and control plan.
  • A governance and operating model note: decision rights, funding model assumptions, and how you keep change sustainable.

Digitalisation Advanced


CPD tracking

Fixed hours for this level: 12. Time for the timed assessment is counted once, on a pass.

CPD and certification alignment (guidance, not endorsed):

Advanced is about strategy, governance, and sustainable operating models. That maps well to:

  • TOGAF (enterprise architecture orientation): target state, capability design, and governance.
  • ITIL 4: service stewardship, measurement, and continual improvement.
  • APMG-style change and transformation thinking: sequencing change, managing risk, and building adoption that survives reality.
How to use Advanced
At this level you are not designing a programme. You are designing the system the programme lives in.
Good practice
Choose a small set of capabilities to improve and defend the choice. Focus beats enthusiasm every time.

Advanced digitalisation is about scale, shared infrastructure, and long-term stewardship. You are designing not just a programme, but the system the programme lives in.


Strategy and target state architecture

Concept block
Target state
A target state is a shared picture of how the system should work, not a vendor shopping list.
Assumptions: the target is testable; ownership is clear.
Failure modes: a vendor-first target state; no migration plan.
A target state keeps strategy concrete. A reference architecture keeps teams aligned as they deliver in parallel.

The hardest part is focus. You cannot improve every capability at once, so you choose the few that unlock the rest.

Strategy is a sequence of hard choices, not a list of everything you want.

Target state view

Move from today to a focused future design.

Current state

  • Fragmented data models
  • Platform gaps and duplicated services
  • Mixed ownership and unclear accountability

Target state

  • Shared data platform and stable APIs
  • Clear capability ownership
  • Measured outcomes and risk controls

Focus on the capabilities that unblock outcomes first, not the ones that are most fashionable.

Quick check: strategy and target state

Why is a target state useful?

What does a reference architecture provide?

Scenario: Everyone loves the target state diagram, but delivery starts as twenty unrelated projects. What was missing?

What should guide capability choices?

Why is ownership part of the target state?

What keeps a target state credible?

🧪

Worked example. The target state that became a poster, not a plan

A team produces a beautiful target state diagram. Everyone nods. Delivery then starts as twenty unrelated projects because nothing was prioritised and nothing had owners.

A target state earns its keep when it does three things: it forces choices, it sequences the work, and it makes accountability explicit. My opinion: if the target state cannot answer “what do we do next quarter”, it is a picture, not architecture.
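
To make that opinion testable, here is a minimal sketch of the decision rule I come back to in the CPD evidence note below: no named owner plus no measurable outcome means it stays off the roadmap. The `Capability` fields and the example entries are my own illustration, not a prescribed template.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """One row of a hypothetical target state canvas."""
    name: str
    owner: str = ""                 # a named accountable owner, not a delivery lead
    outcome: str = ""               # the outcome this capability unlocks
    measure: str = ""               # how that outcome will be proven
    next_quarter_action: str = ""   # what actually happens next quarter

def roadmap_gaps(cap: Capability) -> list[str]:
    """Return the reasons a capability is not yet roadmap-ready."""
    gaps = []
    if not cap.owner:
        gaps.append("no named owner")
    if not (cap.outcome and cap.measure):
        gaps.append("no measurable outcome")
    if not cap.next_quarter_action:
        gaps.append("cannot answer 'what do we do next quarter'")
    return gaps

canvas = [
    Capability("Identity", owner="Head of Security", outcome="Single sign-on everywhere",
               measure="% of services behind shared identity",
               next_quarter_action="Migrate the two highest-risk apps"),
    Capability("Data quality", outcome="Trusted reporting"),  # no owner, no measure: stays parked
]

for cap in canvas:
    gaps = roadmap_gaps(cap)
    print(f"{cap.name}: {'roadmap' if not gaps else 'parked (' + ', '.join(gaps) + ')'}")
```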

⚠️

Common mistakes in target state work

  • Listing everything you want, rather than choosing what unlocks outcomes.
  • Mixing “capability” with “project” and “tool”. Keep them distinct.
  • No operational view: how it will be run, monitored, and supported.
  • No migration story: how you move from today to tomorrow safely.

🔎

Verification. A target state quality bar

  • Clear top 3 outcomes and the measures that prove them.
  • Clear capability owners (not just delivery leads).
  • A phased roadmap with dependencies and a risk register.
  • A “stop doing” list (otherwise you are just adding work).

📝

Reflection prompt

If you could only fix one capability that would reduce chaos for everyone, what would it be: identity, data quality, integration, monitoring, or decision rights? And why that one?


Data sharing, models and standards

Concept block
Sharing boundary
Sharing is safe when meaning is shared and controls are enforced at boundaries.
Assumptions: standards are enforced; auditability exists.
Failure modes: semantic fragmentation; untracked access.
At scale, digitalisation depends on shared meaning. A canonical model reduces translation work. A data sharing agreement keeps trust intact.

Interoperability is not just a technical issue. It is legal, operational, and cultural. You need incentives for everyone to keep the model accurate over time.

Shared data model

Many systems, one shared meaning.

Utility systems → Canonical model → Market services

Shared IDs

Stable identifiers prevent mismatch errors.

Version control

Changes stay visible and reversible.

Data ownership

Every field has a steward and a rule.
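
Here is a minimal sketch of what "every field has a steward and a rule" can look like once it is written down rather than assumed. The field names, stewards, and rules are invented for illustration; the point is that the canonical model is checkable, not just a diagram.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class CanonicalField:
    """One field in a hypothetical canonical model: a stable ID, a steward, and a rule."""
    field_id: str                      # stable identifier shared by every system
    steward: str                       # who answers for meaning and quality
    rule: Callable[[object], bool]     # the validation rule consumers can rely on
    version: int = 1                   # bumped only through change control

CANONICAL_MODEL = {
    "meter.point.id": CanonicalField("meter.point.id", steward="Metering data office",
                                     rule=lambda v: isinstance(v, str) and len(v) == 13),
    "customer.postcode": CanonicalField("customer.postcode", steward="CRM data office",
                                        rule=lambda v: isinstance(v, str) and v.strip() != ""),
}

def validate(record: dict) -> dict[str, bool]:
    """Check a record against the canonical rules; unknown fields fail loudly instead of drifting."""
    results = {}
    for name, value in record.items():
        fld = CANONICAL_MODEL.get(name)
        results[name] = fld.rule(value) if fld else False
    return results

print(validate({"meter.point.id": "1234567890123", "customer.postcode": "AB1 2CD", "legacy.code": "x"}))
```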

Quick check: data sharing and standards

Why use a canonical model?

What makes interoperability hard?

Why are data sharing agreements important?

What is the risk of unmanaged model changes?

Why do shared IDs matter?

Who should own a shared field?

🧪

Worked example. Shared data without shared incentives

A common failure mode in ecosystems is “we agreed the model”, then nobody funds stewardship. Publishers ship changes when it suits them, consumers build workarounds, and the canonical model becomes fiction. Interoperability dies slowly, then all at once.
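
One way to stop that slide is to make coordinated release mechanical rather than polite. This is a small sketch under assumed rules (steward approval plus a 90-day notice window for breaking changes); your change control path will differ, but it should be this checkable.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class ModelChange:
    """A proposed change to the shared model (fields are illustrative)."""
    field_id: str
    description: str
    breaking: bool
    steward_approved: bool = False
    consumers_notified_on: Optional[date] = None

NOTICE_PERIOD = timedelta(days=90)   # assumed deprecation window for breaking changes

def can_release(change: ModelChange, today: date) -> tuple[bool, str]:
    """A change ships only with steward approval and, if breaking, after the notice period."""
    if not change.steward_approved:
        return False, "waiting for steward approval"
    if change.breaking:
        if change.consumers_notified_on is None:
            return False, "consumers have not been notified"
        if today - change.consumers_notified_on < NOTICE_PERIOD:
            return False, "notice period has not elapsed"
    return True, "ok to release"

change = ModelChange("meter.point.id", "widen identifier to 16 characters", breaking=True,
                     steward_approved=True, consumers_notified_on=date(2025, 1, 10))
print(can_release(change, today=date(2025, 2, 1)))   # (False, 'notice period has not elapsed')
```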

⚠️

Common mistakes in standards at scale

  • Assuming technical agreement is enough. Incentives and stewardship matter more.
  • No version governance. Changes land in production without coordinated release.
  • Treating data sharing agreements as paperwork, not operational reality.
  • Underestimating identity and authorisation as the backbone of trust.

🔎

Verification. “Can we share data safely” checklist

  • Clear purpose and lawful basis for sharing (and a retention story).
  • Named stewards and a change control path for the shared model.
  • Access controls and audit trails (who accessed what, when, why).
  • Monitoring for misuse and unexpected volume or patterns.

📝

Reflection prompt

Which is harder in your world: agreeing a standard, or keeping it healthy for five years? What would you put in place to make it survive leadership changes?


Platforms, ecosystems and governance

Concept block
Platform and ecosystem
Ecosystems need shared interfaces and shared incentives to avoid fragmentation.
Assumptions: interfaces are stable; governance exists.
Failure modes: integration sprawl; misaligned incentives.
A platform only works when governance is clear. An ecosystem needs balanced roles so trust stays high.

This is where stewardship matters. If one actor abuses the system, everyone pays the cost.

Ecosystem roles

Balance publishers, consumers, and governors.

Publishers

Provide authoritative data and updates.

Consumers

Use data to build services and insights.

Governors

Set rules and protect trust.

Healthy ecosystems make value easy to access and misuse hard to hide.

Quick check: platforms and ecosystems

What makes a platform different from a project?

Why does governance matter for platforms?

Who are publishers in an ecosystem?

What happens when roles are unbalanced?

Why is stewardship important?

What is a common platform risk?

🧪

Worked example. The ecosystem that collapsed under “free riders”

A platform succeeds, consumers build on it quickly, and usage grows. But only one party pays for reliability and support. Over time, incidents rise, trust drops, and everyone quietly rebuilds their own version. That is not a technical failure. That is a governance and funding failure.

⚠️

Common mistakes in ecosystems

  • No clear rules for participation, quality, and consequences for misuse.
  • Funding models that reward consumption but not stewardship.
  • Measuring adoption without measuring trust, quality, and incident burden.

🔎

Verification. Is the platform actually governable?

  • Who can change contracts and schemas, and how are changes announced?
  • What happens when a consumer breaks rules (rate limits, access revocation, remediation)? A minimal enforcement sketch follows this list.
  • How do you measure trust: data quality, incidents, complaints, and audit findings?
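
The enforcement sketch mentioned above. The rate limit and the number of breaches before revocation are assumptions for illustration; what matters is that the rules are written down and applied the same way to every consumer.

```python
from dataclasses import dataclass

RATE_LIMIT = 100     # assumed requests per minute
REVOKE_AFTER = 3     # assumed breaches before access is revoked

@dataclass
class Consumer:
    """An ecosystem consumer and its standing (fields are illustrative)."""
    name: str
    requests_this_minute: int = 0
    violations: int = 0
    revoked: bool = False

def handle_request(consumer: Consumer) -> str:
    """Apply the participation rules: rate limiting first, revocation after repeated breaches."""
    if consumer.revoked:
        return "denied: access revoked, remediation required"
    consumer.requests_this_minute += 1
    if consumer.requests_this_minute > RATE_LIMIT:
        consumer.violations += 1
        if consumer.violations >= REVOKE_AFTER:
            consumer.revoked = True
            return "denied: revoked after repeated breaches"
        return "denied: rate limited"
    return "ok"

partner = Consumer("analytics-partner")
for _ in range(103):                 # 100 allowed requests, then three breaches in a row
    status = handle_request(partner)
print(status)                        # denied: revoked after repeated breaches
```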

📝

Reflection prompt

If you were forced to pick one: would you optimise for speed of onboarding new consumers, or for safety and auditability? What evidence would you use to justify your choice?


Measurement, risk and roadmaps

Concept block
Measure and steer
Risk and roadmaps become manageable when measurement feeds decisions.
Assumptions: measures are meaningful; risk is revisited.
Failure modes: big bang plans; paper risk management.
Digitalisation is never complete, so you need a way to steer. A risk appetite guides how fast you move. A roadmap keeps teams aligned.

If you cannot measure adoption, quality, and stability together, you will eventually drift.

Roadmap view

Measure, learn, and adjust.

Phase 1

Stabilise data quality and ownership.

Phase 2

Scale platforms and automate reporting.

Phase 3

Expand ecosystem services and analytics.

Roadmaps should change when evidence changes. Keep them honest with data.
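
To keep that honesty practical, every roadmap item should carry its outcome, its measure, and its dependencies, so sequencing mistakes surface before delivery does. A minimal sketch with invented items that echo the phases above:

```python
from dataclasses import dataclass, field

@dataclass
class RoadmapItem:
    """A roadmap item that carries its outcome, measure, and dependencies (illustrative)."""
    name: str
    phase: int
    outcome: str
    measure: str
    depends_on: list[str] = field(default_factory=list)

def sequencing_errors(items: list[RoadmapItem]) -> list[str]:
    """Flag items scheduled before the things they depend on."""
    phase_of = {i.name: i.phase for i in items}
    errors = []
    for item in items:
        for dep in item.depends_on:
            if dep not in phase_of:
                errors.append(f"{item.name}: unknown dependency '{dep}'")
            elif phase_of[dep] > item.phase:
                errors.append(f"{item.name} (phase {item.phase}) depends on {dep} (phase {phase_of[dep]})")
    return errors

roadmap = [
    RoadmapItem("Data ownership", phase=1, outcome="Named stewards", measure="% of fields with a steward"),
    RoadmapItem("Automated reporting", phase=2, outcome="Less manual reporting",
                measure="Hours of manual reporting per month", depends_on=["Data ownership"]),
    RoadmapItem("Ecosystem analytics", phase=1, outcome="Partner insights",
                measure="Active partner users", depends_on=["Automated reporting"]),  # scheduled too early
]
print(sequencing_errors(roadmap))
```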

Quick check: measurement and roadmaps

Why define risk appetite?

What should a roadmap include?

Why measure adoption and quality together?

What happens when you cannot measure outcomes?

Why revisit a roadmap often?

What is a common mistake in measurement?

🧪

Worked example. Vanity metrics that funded the wrong work

A programme reports “number of dashboards built” and “number of APIs published”. Those numbers go up, and leadership feels good. Meanwhile, journey completion rates and data quality remain flat. The programme then optimises for output, not outcome, because that is what it is rewarded for.
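
A small counter-check that keeps this visible: pair each output metric with the outcome it is supposed to move, and flag the pairs where output rises while the outcome stays flat. The numbers and pairings below are illustrative.

```python
# Pair each output metric with the outcome it is supposed to move (illustrative data).
metric_pairs = [
    # (name, output last quarter, output this quarter, outcome last quarter, outcome this quarter)
    ("dashboards built -> journey completion", 40, 65, 0.52, 0.53),
    ("APIs published -> data quality score",   12, 20, 0.71, 0.70),
]

def flat(before: float, after: float, tolerance: float = 0.02) -> bool:
    """Treat an outcome as flat if it moved less than the tolerance."""
    return abs(after - before) < tolerance

for name, out_before, out_after, res_before, res_after in metric_pairs:
    if out_after > out_before and flat(res_before, res_after):
        print(f"Warning: '{name}' shows rising output with a flat outcome.")
```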

⚠️

Common mistakes in measurement and roadmapping

  • Metrics that are easy to count but not meaningful to users.
  • Roadmaps that never change even when evidence changes.
  • Risk registers that list everything, then influence nothing.
  • Treating risk appetite as a slogan rather than a decision rule.

🔎

Verification. Evidence-led roadmap review

  • For each roadmap item: what outcome does it change, and how will you measure it?
  • What are the top 3 risks, and what controls reduce them?
  • What do you stop doing to create capacity?
  • What would make you pause or roll back?

🧠

Systems thinking. Feedback loops and unintended behaviour

Digitalisation connects parts of a system that used to be loosely coupled. That creates feedback loops. Feedback loops can stabilise a system or destabilise it. This is why measurement is not a reporting task. Measurement is part of control.

A simple feedback loop model

Measure, decide, act, then measure again

Measure

Telemetry, KPIs, leading indicators.

Decide

Rule, model, or policy choice.

Act

Automation, process change, incentives.

Delay

Real systems respond later than dashboards.
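
A toy simulation of that loop with the delay made explicit. The controller acts on measurements that are several steps old, which is enough to turn a confident-looking dashboard into overshoot and oscillation. All numbers are illustrative.

```python
from collections import deque

# A toy measure -> decide -> act loop where the measurement arrives late (illustrative numbers).
TARGET = 100.0      # desired value of some outcome metric
DELAY = 3           # measurements lag the real system by this many steps
GAIN = 0.8          # how aggressively we act on the observed gap

value = 60.0
pending = deque([value] * DELAY, maxlen=DELAY)   # what the dashboard shows, steps behind reality

for step in range(12):
    observed = pending[0]                 # decide on stale data
    action = GAIN * (TARGET - observed)   # act
    value += action                       # the real system responds now
    pending.append(value)                 # ...but we only see it DELAY steps later
    print(f"step {step:2d}: real={value:7.2f} dashboard={observed:7.2f}")
```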

🧪

Worked example. A good metric that created worse behaviour

A team is measured on “tickets closed”. They close tickets faster by closing them early and re-opening them later, or by pushing work to another queue. The metric improves, the service gets worse, and trust collapses.

My opinion: if a metric can be gamed, it will be gamed. Not because people are evil, but because people respond to incentives under pressure. The fix is to measure outcomes and the cost of failure, not activity.
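
One way to catch that pattern early is to refuse to report closure counts without the reopen and transfer rates that reveal gaming. The thresholds below are my own assumptions, not a standard.

```python
# Ticket stats for one queue over a month (illustrative numbers).
closed = 480
reopened = 130
transferred_out = 95

reopen_rate = reopened / closed
transfer_rate = transferred_out / closed

# Assumed thresholds: above these, the closure count is not telling the real story.
if reopen_rate > 0.15 or transfer_rate > 0.10:
    print(f"Closure count {closed} is suspect: "
          f"{reopen_rate:.0%} reopened, {transfer_rate:.0%} pushed to other queues.")
else:
    print(f"Closed {closed} tickets with healthy reopen ({reopen_rate:.0%}) "
          f"and transfer ({transfer_rate:.0%}) rates.")
```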

🔎

Verification. A measurement pack that earns trust

  • Outcome metric: what the user experiences.
  • Reliability metric: errors, latency percentiles, rework rate.
  • Risk metric: incidents, audit findings, privacy events.
  • Adoption metric: who uses it, and who dropped off.
  • Review cadence: who decides changes, and how often.

🧾

CPD evidence (advanced, still honest)

  • What I studied: target state and capability focus, ecosystem stewardship, standards and agreements, and evidence-led roadmaps.
  • What I produced: a target state canvas, an ecosystem map, and a phased roadmap with risks and measures.
  • What changed in my practice: one decision rule. Example: “If we cannot name an owner and a measurable outcome, it does not go on the roadmap.”
  • Evidence artefact: one page showing (1) outcome metrics, (2) the next phase, and (3) the risk controls.
