CPD timing for this level

Foundations time breakdown

This is the first pass of a defensible timing model for this level, based on what is actually on the page: reading, labs, checkpoints, and reflection.

  • Reading: 23m (3,343 words · base 17m × 1.3)
  • Labs: 60m (4 activities × 15m)
  • Checkpoints: 20m (4 blocks × 5m)
  • Reflection: 32m (4 modules × 8m)
  • Estimated guided time: 2h 15m (based on page content and disclosed assumptions)
  • Claimed level hours: 8h (claim includes reattempts, deeper practice, and capstone work)
The claimed hours are higher than the current on-page estimate by about 6h. That gap is where I will add more guided practice and assessment-grade work so the hours are earned, not declared.
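
To make the model reproducible, here is a minimal sketch of the arithmetic in Python. The 200 wpm base reading speed is my assumption to match the disclosed base of 17m; the page's own rounding may differ by a minute.

```python
from math import ceil

WPM = 200          # assumed base reading speed (3,343 words -> ~17m base)
MULTIPLIER = 1.3   # disclosed reading multiplier

def guided_minutes(words: int, labs: int, checkpoints: int, modules: int) -> int:
    reading = ceil(words / WPM * MULTIPLIER)  # ~22-23m for this page
    return reading + labs * 15 + checkpoints * 5 + modules * 8

total = guided_minutes(words=3_343, labs=4, checkpoints=4, modules=4)
print(f"{total // 60}h {total % 60}m")  # roughly 2h 15m
```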

What changes at this level

Level expectations

I want each level to feel independent, but also clearly deeper than the last. This panel makes the jump explicit so the value is obvious.

Anchor standards (course wide)
  • ITIL 4 (service value system)
  • COBIT 2019 (governance of enterprise IT)
Assessment intent

Foundations: clear definitions and correct mapping to outcomes.

Assessment style

  • Format: mixed
  • Pass standard: coming next

Not endorsed by a certification body. This is my marking standard for consistency and CPD evidence.

Evidence you can save (CPD friendly)
  • A definitions page: digitisation vs digitalisation vs transformation, plus one example from a service you know.
  • A simple customer journey map with one friction point and the outcome you would measure after fixing it.
  • A before and after operating note: what changed in process, data, and accountability, not only tooling.

Digitalisation Foundations


CPD tracking

Fixed hours for this level: 8. Timed assessment time is counted once, on a pass.

CPD and certification alignment (guidance, not endorsed):

Digitalisation is a discipline that sits across service design, data, governance, and delivery. This course supports CPD evidence and maps well to respected frameworks such as:

  • ITIL 4: service value systems, continual improvement, and operating realities.
  • TOGAF (enterprise architecture orientation): capabilities, target state thinking, and governance language.
  • BCS business analysis and digital practice expectations: outcomes, clarity, and stakeholder communication.

How to use Foundations

This is enjoyable when you stop treating it like a buzzword hunt and start treating it like a craft.

Good practice: pick one service you know and keep it in mind throughout. Each concept should help you describe one improvement you would actually ship.

Digitalisation is not a tool rollout. It is the deliberate redesign of how value is created using data, platforms, and better ways of working. These foundations focus on outcomes, not buzzwords.


Digitalisation vs digitisation vs “digital”

Concept block
Digitisation vs digitalisation
Digitisation converts. Digitalisation changes the operating loop so outcomes improve.
Assumptions
  • Outcome is defined
  • Work changes, not only tools
Failure modes
  • Tool adoption without change
  • Busy dashboards

People use “digital” to mean everything and therefore it means nothing. So I am going to be strict about words in Foundations, because the rest of the course depends on it.

Three words that get mixed up

Same technology, different level of change

Digitisation

Turning analogue information into digital form.

Example: scanning a paper form into a PDF.

Digitalisation

Using digital capability to redesign how work happens.

Example: redesigning a service so the form is validated, routed, tracked, and improved using data.

Digital transformation

Organisation-wide change enabled by digitalisation.

Example: new operating model, new incentives, new products, and new ways of measuring value.

🧪

Worked example. A PDF is not a digital service

Digitisation is getting a paper process into a PDF. Digitalisation is making the process actually work end to end. If a customer still has to email the PDF, chase a response, and repeat their details on a phone call, nothing meaningful changed.

⚠️

Common mistakes in definitions

  • Calling a new tool “transformation” without changing the underlying process.
  • Mistaking a dashboard for improvement. Visibility is not the same as action.
  • Treating “digital” as a badge of honour rather than a measurable outcome.

Why digitalisation matters

Concept block
Why it matters
Digitalisation matters when it changes outcomes, not only when it changes tooling.
Assumptions
  • Drivers are stated
  • Outcomes are measurable
Failure modes
  • Activity without outcomes
  • Overpromising

Digitalisation matters because expectations are higher, services are more complex, and regulation is tighter. In energy, the pressure is even greater: net zero targets, real time grid data, and consumer trust all depend on digital capability.

If core services cannot be delivered securely and simply through digital channels, you have a digitalisation problem.

Digitalisation is also about focus. It forces clarity on what value looks like and who the change is for. When leaders treat it as a technology project, it usually stalls.

Drivers and context

Digitalisation sits between policy, operations, and people.

Policy

Targets and rules

Net zero, safety, and reporting duties shape priorities.

Operations

Services and flow

Teams redesign journeys and data movement.

People

Outcomes and trust

Customers and operators feel the impact.


Quick check: why digitalisation matters

Why is digitalisation more than a tool rollout?

Scenario: A team replaces paper forms with PDFs but customers still chase updates by phone. Is that digitalisation?

What happens when digitalisation is treated as a tech project only?

Scenario: Name one outcome metric and one leading indicator for “meter reading submission is easier”.

Why do regulations influence digitalisation?

🧪

Worked example. “We bought a platform” vs “we improved a service”

A common story: an organisation buys a shiny platform, runs a big programme, and six months later the frontline still copy-pastes between systems. Leaders then say “people are resistant”. My opinion: if the work got harder, people are not resistant. They are rational.

A more honest approach is to pick one service outcome you can measure, then redesign the journey end-to-end. Example: “A customer can submit a meter reading in under two minutes and see confirmation immediately.” That gives you a concrete target for process, data, and platform work.
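
If you wanted to check that target mechanically, a minimal sketch could look like this. The event shape and the two-minute threshold are illustrative assumptions, not a real schema.

```python
from datetime import datetime, timedelta

TARGET = timedelta(minutes=2)  # assumed outcome target

def outcome_met(events: list[tuple[datetime, datetime | None]]) -> float:
    """Share of submissions confirmed within the two-minute target.

    Each event is (started_at, confirmed_at); confirmed_at is None when the
    customer never saw a confirmation. The field shape is a sketch assumption.
    """
    if not events:
        return 0.0
    ok = sum(1 for started, confirmed in events
             if confirmed is not None and confirmed - started <= TARGET)
    return ok / len(events)
```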

⚠️

Common mistakes (what I see in real programmes)

  • Buying tools before agreeing outcomes, operating model, and ownership.
  • Measuring activity (number of dashboards, number of tickets) instead of user experience and reliability.
  • Treating “go live” as the end, then starving the service of support and monitoring.
  • Redesigning journeys without involving the people who do the work every day.

🔎

Verification. A one-page “are we doing digitalisation or theatre” check

  • Can you name one user outcome in plain language (not a system feature)?
  • Can you show evidence that the journey improved (time, success rate, complaints, rework)?
  • Can you point to who owns the service, who owns the data, and who owns the platform?
  • If the service degrades, do you know who gets paged and what they will look at?

📝

Reflection prompt

Think of a “digital” change you have seen that annoyed people. If you had to fix it with one principle, what would it be: reduce steps, reduce uncertainty, reduce handoffs, or reduce risk?


The components of a digitalised system (what has to exist under the surface)

Concept block
Components of a digitalised system
Digitalised systems need components that support flow, measurement, and change.
Assumptions
  • Feedback loops exist
  • Ownership exists
Failure modes
  • Fragmented systems
  • No operational plan

Digitalisation is not one technology. It is a stack of capabilities working together. If one layer is weak, the whole system feels fragile, slow, or untrustworthy.

A practical component map

Data, compute, control, and human experience

Sensing and data generation

IoT, logs, user actions, operational telemetry.

Connectivity

Networks, APIs, event streams, integration contracts.

Platforms

Cloud, data platforms, identity, shared services.

Analytics and AI

Forecasting, anomaly detection, optimisation, decision support.

Automation and control

Feedback loops that change behaviour, not only report it (a sketch follows this map).

UX and service design

Interfaces, journeys, trust signals, accessibility.
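
To make “change behaviour, not only report it” concrete, here is a minimal control-loop sketch. read_load and shed_load are hypothetical stand-ins for real telemetry and control interfaces, and the threshold is an assumption.

```python
import time

THRESHOLD = 0.9  # assumed utilisation limit for the sketch

def control_loop(read_load, shed_load, interval_s: float = 60.0) -> None:
    """Sense -> decide -> act. A reporting-only system stops after sensing."""
    while True:
        load = read_load()       # sensing and data generation
        if load > THRESHOLD:     # a deliberately simple decision rule
            shed_load()          # automation and control: act on the data
        time.sleep(interval_s)
```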

🔎

Verification. Can you point to each layer in your world?

  • Name one data source you trust and one you do not, and why.
  • Name the contract that moves the data (API, file, event).
  • Name where the truth is stored (and who owns it).
  • Name the action that happens because of the data (not just the report).

Data, standards and interoperability

Concept block
Process to data to decision
Digital systems improve when processes create reliable data and data changes decisions.
Assumptions
  • Processes are observable
  • Definitions are shared
Failure modes
  • Spreadsheet islands
  • Data without action

A dataset is only useful when people trust it. A schema tells everyone what the data means. A standard keeps systems aligned.
An API is the bridge. Interoperability is the goal.

Standards and data flow

Shared models make data reusable.

Source systems → API layer → Common data model → Dashboards and services

Shared definitions

Standard fields prevent translation errors.

Stable identifiers

Common IDs link records across systems.

Version control

Schema changes stay visible and safe (a sketch combining these three ideas follows).
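
Here is a minimal sketch of the three ideas together in Python. The field names, allowed values, and version tag are illustrative assumptions, not a real organisational schema.

```python
from dataclasses import dataclass
from enum import Enum

SCHEMA_VERSION = "2.1.0"  # version control: schema changes stay visible

class MeterReadType(Enum):  # shared definition: allowed values are explicit
    CUSTOMER = "customer"
    ESTIMATED = "estimated"
    SMART = "smart"

@dataclass(frozen=True)
class MeterReading:
    mpan: str                 # stable identifier linking records across systems
    read_type: MeterReadType  # standard field: prevents translation errors
    value_kwh: float
    schema_version: str = SCHEMA_VERSION
```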

Quick check: data and standards

Why does a schema matter?

What does interoperability enable?

What is a data model used for?

Why are APIs important?

Scenario: Two systems both say “customer”, but mean different things. What is the fix?

What happens without shared standards?

Why does data trust matter?

🧪

Worked example. When two systems disagree on a definition and nobody notices

Imagine two teams both talk about “customer”. One means “the bill payer”. Another means “the person who contacted us”. Both are valid. The problem is when a dashboard quietly mixes them and leaders make decisions off the combined number.

This is why I’m strict about schemas and data models. The goal is not bureaucracy. The goal is that two systems can exchange data without silently changing meaning.

⚠️

Common mistakes in standards and interoperability

  • Treating “data” as only a technical asset, not a shared organisational promise.
  • Using the same word for different concepts, then arguing about numbers in meetings.
  • Changing schemas without versioning, then breaking downstream consumers.
  • Building point-to-point integrations for everything because it is quicker today.

🔎

Verification. Can you prove meaning survived the journey? (A sketch of this check follows the list.)

  • Pick one critical field (for example “status”, “completion date”, “meter read type”).
  • Write what it means, what values are allowed, and who can change it.
  • Check it in at least two systems and confirm it matches.
  • If it does not match, write which system is the source of truth and why.
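
A minimal sketch of that check, assuming two hypothetical system exports of the same field:

```python
# Field contract and exports are illustrative assumptions.
CONTRACT = {"field": "meter_read_type",
            "allowed": {"customer", "estimated", "smart"},
            "source_of_truth": "billing"}

exports = {
    "billing": {"customer", "estimated", "smart"},
    "crm":     {"customer", "estimate", "smart"},  # quietly drifted value
}

for system, values in exports.items():
    extras = values - CONTRACT["allowed"]
    if extras:
        print(f"{system}: values outside the contract {sorted(extras)}; "
              f"defer to {CONTRACT['source_of_truth']}")
```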

📝

Reflection prompt

What is one term your organisation argues about because it is fuzzy? If you had to define it in a way a computer could enforce, what would you write?


Platforms, journeys and dashboards

Concept block
Platforms and journeys
Journeys are the user view. Platforms are the reuse view. Both must align.
Assumptions
  • Journeys are mapped
  • Platforms are operable
Failure modes
  • Platform drift
  • Dashboard theatre

A platform keeps digital work consistent. A journey map shows where data and service design must line up. A user story keeps teams focused on real outcomes.

Dashboards turn journeys into signals. They show where digital journeys break, where demand changes, and where teams need to respond.

Journey and platform view

Journeys run across systems, not inside them.

Touchpoints

Web, app, contact centre, field teams.

Platform services

Identity, payments, notifications, case flow.

Data platform

Shared data models and analytics.

Insight loop

Dashboards guide improvements.

Evidence flow

Journeys improve when teams can see adoption, friction, and drop off together.

Quick check: platforms and journeys

Why use a platform approach?

What is a journey?

Why are dashboards useful?

What is the purpose of a user story?

Scenario: A journey crosses five systems and users keep repeating details. What should you change first?

What happens when journeys are designed per system?

Why link platforms to data?

🧪

Worked example. The “simple” journey that exposes three hidden systems

Pick something boring: “change my direct debit”. On paper it is one action. In many organisations it touches identity, billing, notifications, a case system, and maybe a manual approval queue. If those services are not platformed, you end up rebuilding the same flow again and again.

⚠️

Common mistakes in journey thinking

  • Designing journeys inside one system boundary and blaming users when it feels broken.
  • Building dashboards that report outcomes but not the operational signals needed to fix them.
  • Optimising one stage and making the end-to-end journey slower (local optimisation).

🔎

Verification. A minimum dashboard that earns trust (a sketch follows the list)

  • Adoption: how many users successfully complete the journey.
  • Friction: drop-off and “contact us” rate by step.
  • Reliability: error rate and latency for the key service calls.
  • Quality: how often humans have to rework the outcome later.
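
As a sketch of how those four signals could be computed from raw journey records, assuming a hypothetical per-attempt event shape:

```python
# Per-attempt journey records; field names are sketch assumptions.
attempts = [
    {"completed": True,  "dropped_at": None,    "errors": 0, "reworked": False},
    {"completed": False, "dropped_at": "step2", "errors": 1, "reworked": False},
    {"completed": True,  "dropped_at": None,    "errors": 0, "reworked": True},
]

n = len(attempts)
adoption = sum(a["completed"] for a in attempts) / n               # successful completions
friction = sum(a["dropped_at"] is not None for a in attempts) / n  # drop-off rate
errors_per_attempt = sum(a["errors"] for a in attempts) / n        # reliability signal
rework = sum(a["reworked"] for a in attempts) / n                  # later human rework

print(f"adoption {adoption:.0%}, friction {friction:.0%}, "
      f"errors/attempt {errors_per_attempt:.2f}, rework {rework:.0%}")
```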

📝

Reflection prompt

What is the one journey your users complain about most? If you could fix only one step of it this quarter, which step would you choose, and how would you prove it improved?


Risks, governance and people

Concept block
Risks and governance
Governance works when decisions, enforcement, and evidence are connected.
Assumptions
  • Decision rights exist
  • Governance is usable
Failure modes
  • Committees without enforcement
  • Hidden exceptions

Digitalisation creates new risks: data quality problems, weak ownership, and security gaps. Governance makes those risks visible and managed. People still do the work, so roles and responsibility must be clear.

Digitalisation without governance is speed without control.

Governance view

People, process, data and technology need checks.

People

Named owners, clear decision rights, real accountability.

Process

Approvals, reviews, and change control that match risk.

Data

Quality checks, retention rules, and access boundaries.

Technology

Security controls, logs, and audit trails.

Quick check: risks and governance

Why is governance part of digitalisation?

What is a common risk in digital programmes?

Why does security matter for digitalisation?

Scenario: A digital form goes live and creates permanent manual rework. What governance control was missing?

What should be clear in a governance model?

Why involve people early?

What happens if you move fast without controls?

🧪

Worked example. “Fast” delivery that creates permanent rework

A team ships a new digital form quickly. It captures the wrong fields, has no validation, and dumps data into a spreadsheet. Congratulations: the digital part is fast. The operational part is now slower forever.

Governance is how you prevent this. Not by blocking delivery, but by making quality, ownership, and risk visible early enough to fix. My opinion: governance should feel like guardrails, not handcuffs.

⚠️

Common mistakes in governance

  • “Everyone owns it” which means nobody owns it.
  • Governance meetings with no decision rights, no data, and no follow-up.
  • Security treated as a late-stage review instead of a design constraint.
  • Rolling out change without supporting people with training and feedback loops.

🔎

Verification. A minimal governance model that can actually run (a sketch follows the list)

  • Named service owner (accountable for outcomes).
  • Named data owner/steward for key datasets and definitions.
  • A change process that matches risk (small changes fast, big changes reviewed).
  • Monitoring and incident response for digital services (who responds, how, when).
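
To show how small that model can be, here is a minimal sketch of it as a checkable record. Role names and the vague-owner rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ServiceGovernance:
    service_owner: str   # accountable for outcomes
    data_owner: str      # steward for key datasets and definitions
    change_process: str  # e.g. "fast-track" for low risk, "reviewed" for high
    on_call: str         # who responds when the service degrades

def ownership_gaps(g: ServiceGovernance) -> list[str]:
    """Flag roles with no real owner: "everyone owns it" means nobody does."""
    vague = {"", "everyone", "tbd"}
    return [role for role, holder in vars(g).items()
            if str(holder).strip().lower() in vague]

print(ownership_gaps(ServiceGovernance("Dana", "everyone", "reviewed", "")))
# -> ['data_owner', 'on_call']
```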

🧾

CPD evidence (small, honest, useful)

  • What I studied: drivers of digitalisation, interoperability basics, journey thinking, and governance fundamentals.
  • What I practised: one value map (digitisation vs digitalisation), one journey dashboard sketch, and one maturity check.
  • What changed in my practice: one new habit. Example: “I will write the outcome and the failure cost before I discuss tools.”
  • Evidence artefact: a one-page summary with an outcome, a journey, and the minimum metrics to prove improvement.
