Module 55 of 64 · EA Capability and Governance

Skills, roles, and maturity models

50 min read · 6 outcomes · 1 interactive tool · 4 standards cited

This is the fourth of six EA Capability and Governance modules. It covers the G18A Architecture Skills Framework in full, including all seven competency categories and the four proficiency levels. It connects skills thinking to the G249 capability maturity assessment, explains why vanity scoring is a trap, and shows how to make maturity assessment operationally useful through consequence-based diagnosis. No knowledge beyond the preceding three modules in this stage is assumed.

By the end of this module you will be able to:

  • List all seven competency categories from the G18A Architecture Skills Framework and explain what each one covers
  • Explain the four proficiency levels and how they apply across competency categories, including which levels different architecture roles typically need
  • Connect skills assessment to the G249 capability maturity model and explain how maturity assessment should diagnose specific operating weaknesses
  • Distinguish useful maturity work from vanity scoring using the three-question diagnostic test
  • Identify single-point-of-failure risks in architecture team composition and explain why these risks are invisible in maturity scores
  • Apply skills and maturity thinking to London's architecture capability, identifying critical skill needs, fragility risks, and priority maturity improvements

Real-world case · 2021

Level 2: Developing. Generic recommendations. No operational diagnosis.

A consulting firm delivered a maturity assessment to a mid-sized insurer in 2021. The assessment used a five-level model and placed the organisation at "Level 2: Developing" across all dimensions.

The CTO asked a pointed question: "What specific thing would we do differently tomorrow if we believed this score?" The consulting team could not answer in operational terms. They could say the organisation was "developing" but could not name which specific skill gaps were causing which specific operating problems, or what the smallest intervention would be that would produce a visible improvement.

The score described a position on a scale. It did not diagnose a specific weakness with a specific consequence. The CTO concluded that the assessment had measured something without explaining anything. The enterprise had spent four months and significant consulting fees to learn a number that did not change a single decision.

If a maturity assessment cannot explain what specific thing the enterprise should do differently tomorrow, is the score useful?

That story illustrates the difference between useful maturity work and vanity scoring. This module explains the G18A skills framework, the G249 maturity model, and how to connect both to real operating improvement rather than generic score production.

If you are already confident with architecture skills and maturity assessment, use the knowledge checks to confirm your understanding and move to Module 56: Running TOGAF without bureaucracy.

55.1 The G18A Architecture Skills Framework

G18A is the TOGAF Series Guide that defines the competency categories and proficiency levels for enterprise architects. It provides a structured way to assess what skills the architecture team needs, where gaps exist, and how to prioritise skill development. The framework is not a job description template. It is a diagnostic tool for understanding what the architecture function can and cannot do.

G18A organises skills into seven competency categories. Each category represents a distinct area of capability that an architecture function needs. No single category is sufficient on its own. An architecture function that excels at technical skills but lacks business skills will produce technically sound architectures that executives cannot understand or support.

Category 1: Generic skills

Leadership, team management, interpersonal communication, and the ability to facilitate cross-functional collaboration. These are not technical skills, but they are essential because enterprise architecture work depends on influencing decisions across organisational boundaries. An architect who cannot communicate a trade-off clearly to a non-technical audience will struggle to move the enterprise forward regardless of technical depth.

Category 2: Business skills and methods

Business case development, strategic planning, business process analysis, and the ability to connect architecture decisions to business outcomes. An architect who cannot explain why a technology choice matters in business terms will struggle to influence executive decisions. Business skills also include understanding of financial planning, investment appraisal, and the business context in which the enterprise operates.

Category 3: Enterprise architecture skills

Knowledge of the ADM, the content framework, governance practices, modelling techniques, and the ability to apply TOGAF concepts proportionately. This is the core professional skill set for enterprise architects. It includes understanding of the architecture repository, the Enterprise Continuum, stakeholder management techniques, and the ability to tailor the method to different enterprise contexts.

Category 4: Programme or project management skills

Understanding of programme governance, delivery methods, dependency management, and transition planning. Architects who cannot work within delivery structures produce outputs that delivery teams cannot use. Programme management skills include understanding of how architecture outputs become project inputs (as described in the G188 integration guidance from Module 49) and how architecture reviews integrate with sprint or stage-gate cadence.

Category 5: IT general knowledge

Broad understanding of technology domains including applications, data, infrastructure, security, and operations. This does not mean deep technical expertise in every area, but sufficient knowledge to make informed architecture decisions and to recognise when specialist input is needed. An enterprise architect who does not understand the basics of database design, network architecture, or security principles will make decisions based on incomplete understanding.

Category 6: Technical IT skills

Deeper expertise in specific technology areas relevant to the enterprise context. For a financial services enterprise, this might include payment processing systems, real-time data platforms, and regulatory reporting technology. For London, this would include operational technology (SCADA, network automation), GIS platforms, smart metering, and telecoms infrastructure.

Category 7: Legal environment

Understanding of the regulatory and legal context in which the enterprise operates. For regulated industries, this includes knowledge of the regulatory framework, compliance obligations, data protection requirements, and the relationship between architecture decisions and regulatory submissions. An architect working in a regulated utility who does not understand Ofgem obligations, the ED3 framework, or data publication requirements will produce architectures that are technically sound but regulatorily uninformed.

55.2 The four proficiency levels

G18A defines four proficiency levels that apply across all seven competency categories. Each level describes a different depth of capability. Understanding the levels matters because not every architect needs the same depth in every category. The skill profile should match the role.

Level 1: Background

Basic awareness of the competency area. The person understands the concepts and terminology but relies on others for application. Suitable for team members who interact with architecture but do not perform architecture work directly. For example, a project manager who needs to understand what an architecture contract is and how it affects delivery, but does not need to create one.

Level 2: Awareness

Working knowledge that allows the person to participate in architecture activities under guidance. The person can contribute to reviews, provide input to domain analysis, and apply standard techniques with support. Suitable for junior architects or experienced technical staff transitioning into architecture roles.

Level 3: Knowledge

Competent practitioner who can apply the competency independently in standard situations. The person can lead domain architecture work, conduct compliance reviews, and make informed decisions within their area of expertise. This is the minimum level for independent architecture work.

Level 4: Expert

Deep expertise that allows the person to handle novel and complex situations, mentor others, and shape the enterprise's approach to the competency area. The person can innovate, challenge existing practice, and guide the architecture function's development in this area.

Role-level profiles

Not every architect needs Level 4 in every category. A chief architect typically needs Level 3 or 4 in enterprise architecture skills, business skills, and generic skills, but may need only Level 2 in specific technical IT areas. A domain architect needs Level 3 or 4 in their domain's technical skills but may need only Level 2 in programme management. The value of the framework is in mapping the gap between the skills the team has and the skills the enterprise needs, not in pursuing maximum scores everywhere.
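The gap-mapping idea above can be sketched as a simple comparison between a role's required profile and an individual's assessed levels. The seven category names follow G18A, but the specific role profile and the assessed levels here are illustrative assumptions, not values from the standard:

```python
# Illustrative chief-architect profile: required G18A proficiency per category.
# Profiles and levels are assumptions for the sketch, not normative G18A content.
REQUIRED = {
    "Generic skills": 4,
    "Business skills and methods": 3,
    "Enterprise architecture skills": 4,
    "Programme or project management skills": 2,
    "IT general knowledge": 3,
    "Technical IT skills": 2,
    "Legal environment": 2,
}

# One individual's assessed levels (invented for illustration).
ASSESSED = {
    "Generic skills": 3,
    "Business skills and methods": 3,
    "Enterprise architecture skills": 4,
    "Programme or project management skills": 2,
    "IT general knowledge": 2,
    "Technical IT skills": 3,
    "Legal environment": 1,
}

def profile_gaps(required, assessed):
    """Return categories where the assessed level falls short, with the shortfall."""
    return {
        category: required[category] - assessed.get(category, 0)
        for category in required
        if assessed.get(category, 0) < required[category]
    }

print(profile_gaps(REQUIRED, ASSESSED))
# {'Generic skills': 1, 'IT general knowledge': 1, 'Legal environment': 1}
```

Note that the Technical IT score above the required level produces no entry: the framework maps gaps against need, it does not reward maximum scores everywhere.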

Architecture capability maturity should be assessed against the organisation's ability to sustain and improve its practices over time.

The TOGAF Standard, 10th Edition (C220), Part 5, Capability and Governance

The key word is sustain. Skills assessment is useful when it identifies where the capability is fragile and what intervention would strengthen it. A skills matrix that maps every person to every level is useful only if it connects to an improvement plan with specific actions and specific consequences.

55.3 G249 and operational maturity assessment

G249 (Architecture Capability Assessment) provides a structured framework for evaluating the overall maturity of the architecture function. Where G18A focuses on individual skills, G249 focuses on organisational capability maturity: how well the architecture function operates as a system.

The critical distinction is between maturity assessment that diagnoses and maturity assessment that decorates. The opening story shows the decorative version: a score on a scale that cannot answer an operational question. Useful maturity work connects scores to consequences using three diagnostic questions.

Question 1: What behaviour is weak?

Not a label like "developing" but a specific description. For example: "The decision log is not maintained between Architecture Board meetings, so decisions are not traceable after the meeting that made them." This specificity is what turns a score into a diagnosis.

Question 2: What consequence does this create?

Not a generic risk like "governance is immature" but a specific impact. For example: "Delivery teams cannot find the rationale behind architecture constraints, which leads to rework when teams unknowingly violate decisions that were made but not recorded." The consequence is what makes the weakness matter.

Question 3: What is the smallest intervention that would visibly improve this?

Not a multi-year transformation programme but a concrete action. For example: "Assign a named repository steward and require the decision log to be updated within 24 hours of each board meeting." The smallest-intervention test prevents maturity assessment from becoming an excuse for large, vague improvement programmes that never deliver visible change.

If a maturity assessment cannot answer all three questions for every dimension it scores, the assessment is measuring position without diagnosing weakness. The score becomes decoration rather than diagnosis.
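One way to make the three-question test operational is to treat it as a validation rule: a finding that cannot answer all three questions is rejected as decoration. This is a minimal sketch under that assumption; the `MaturityFinding` type and its field names are illustrative, not part of G249:

```python
from dataclasses import dataclass

@dataclass
class MaturityFinding:
    dimension: str
    level: int                       # position on the five-level scale
    weak_behaviour: str = ""         # Q1: what specific behaviour is weak?
    consequence: str = ""            # Q2: what impact does that weakness create?
    smallest_intervention: str = ""  # Q3: smallest visibly improving action?

    def is_diagnostic(self) -> bool:
        """A score without all three answers is decoration, not diagnosis."""
        return all([self.weak_behaviour, self.consequence, self.smallest_intervention])

# A bare score, like the one in the opening story: measures without explaining.
decoration = MaturityFinding("Governance", level=2)

# The same score connected to the three diagnostic answers.
diagnosis = MaturityFinding(
    "Governance", level=2,
    weak_behaviour="Decision log not maintained between board meetings",
    consequence="Delivery teams cannot find rationale behind constraints; rework follows",
    smallest_intervention="Named steward updates the log within 24 hours of each meeting",
)

print(decoration.is_diagnostic(), diagnosis.is_diagnostic())  # False True
```

The test is deliberately strict: a finding with two answers out of three still fails, because any missing answer leaves the CTO's question ("What would we do differently tomorrow?") unanswerable.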

Common misconception

A maturity score on a five-level scale is itself a useful output.

A maturity score is useful only when it can explain what specific behaviour is weak, what consequence that weakness creates, and what small improvement would change the operating reality most noticeably. Without that diagnostic connection, the score is decoration. The opening story shows the CTO's test: "What would we do differently tomorrow?" If the answer is vague, the assessment has measured without explaining.


55.4 Single-point-of-failure risks in architecture teams

One of the most common and most dangerous capability risks is concentrating critical architecture knowledge in one person. This creates a single point of failure that goes beyond succession planning. When one person holds the knowledge, the capability is fragile regardless of any maturity score.

Single-point-of-failure risks are invisible in most maturity models because the models assess the function as a whole, not the distribution of capability within it. An architecture function can score Level 3 on every dimension and still be one resignation away from losing its entire operating capability.

Common concentration patterns

  • Repository knowledge. If one person holds all repository knowledge, the repository is effectively inaccessible when that person is absent. The governance memory becomes dependent on one person's availability.
  • Board facilitation. If one person runs all Architecture Board meetings, governance stalls during their leave. No other person can chair the meeting with the contextual knowledge needed to make decisions efficiently.
  • Delivery interface. If one person is the sole interface to delivery teams, the delivery interface is fragile. When that person is unavailable, delivery teams either wait (losing pace) or proceed without architecture guidance (losing coherence).
  • Target architecture understanding. If one person understands the complete target architecture across all domains, the architecture is effectively undocumented regardless of what the repository contains. The documentation exists, but only one person can interpret it in context.
  • Domain expertise. If only one person has Level 3 or 4 proficiency in a critical technical domain (for example, OT/IT boundary architecture in a utility), that domain's architecture capability depends entirely on one person's continued availability.

How G18A helps identify these risks

The G18A skills framework helps identify concentration risks by mapping proficiency levels across the team for every competency category. If any category at Level 3 or 4 is held by only one person, the capability has a fragility that no maturity score will reveal on its own. The mitigation is not just succession planning (which addresses the risk of departure) but active knowledge sharing (which addresses the risk of temporary absence and reduces the cognitive load on the person who holds the knowledge).
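The concentration check described above can be computed directly from a team skills matrix: flag any competency category where exactly one person holds Level 3 or above. The team, names, and levels below are invented for illustration:

```python
# Illustrative team skills matrix: person -> {G18A category: proficiency level}.
# Only Level 3+ entries matter for the fragility check, so lower levels are omitted.
TEAM = {
    "Asha": {"Enterprise architecture skills": 4, "Technical IT skills": 4, "Legal environment": 3},
    "Ben":  {"Enterprise architecture skills": 3, "Generic skills": 3},
    "Chen": {"Generic skills": 4, "Business skills and methods": 3},
}

def spof_categories(team, threshold=3):
    """Return categories where exactly one person is at or above the threshold."""
    holders = {}
    for person, skills in team.items():
        for category, level in skills.items():
            if level >= threshold:
                holders.setdefault(category, []).append(person)
    return {cat: people[0] for cat, people in holders.items() if len(people) == 1}

print(spof_categories(TEAM))
# Flags Technical IT skills and Legal environment (Asha) and
# Business skills and methods (Chen) as single points of failure.
```

A whole-function maturity score over this same team could still read Level 3: the fragility only appears when the distribution of capability is examined, which is exactly the point the section makes.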

Useful maturity work diagnoses specific operating weaknesses and their consequences, not just positions on a generic scale.

55.5 London Grid Distribution: skills and maturity assessment

Applying the G18A framework to London's architecture team identifies specific skill priorities, fragility risks, and maturity improvements that connect directly to operating consequences.

Critical skill needs

  • OT/IT boundary expertise (Technical IT skills, Level 3+). London operates both operational technology (SCADA, network automation) and information technology. Architecture decisions at the OT/IT boundary require specialist knowledge that is scarce. Without this skill, the architecture function cannot make safe decisions about the boundary that separates safety-critical operations from general IT.
  • Regulatory fluency (Legal environment, Level 3). Architecture decisions directly affect regulatory submissions (ED3, LTDS, digitalisation strategy compliance). At least one architect needs working knowledge of Ofgem obligations, the ED3 framework, and data publication requirements. Without this skill, architecture decisions are made without understanding their regulatory consequence.
  • Stakeholder facilitation (Generic skills, Level 3+). London's transformation involves multiple internal and external stakeholders including Ofgem, customer groups, operational staff, and technology partners. The ability to facilitate cross-functional agreement is as important as technical depth.
  • Repository stewardship (Enterprise architecture skills, Level 3). The London repository is the operational memory for governance. At least two people need sufficient skill to maintain it, update the decision log, manage the exception register, and ensure the repository answers real questions faster than asking a senior architect from memory.

Fragility risks

  • If only one person understands the OT/IT boundary architecture, that knowledge is a single point of failure for all safety-related architecture decisions.
  • If only the chief architect can chair the Architecture Board, governance continuity depends on one person's availability.
  • If the repository steward role is treated as a side task rather than a defined responsibility, repository quality will degrade under delivery pressure and the governance memory will become unreliable.

Operational maturity priorities (using the three-question test)

  • Weak behaviour: The decision log is not maintained between board meetings. Consequence: Delivery teams cannot find the rationale behind constraints. Smallest intervention: Assign a named repository steward and require the decision log to be updated within 24 hours of each board meeting.
  • Weak behaviour: Exceptions are recorded but expiry dates are not tracked. Consequence: Waivers accumulate into unmanaged architecture debt. Smallest intervention: Add a quarterly waiver-expiry review to the board agenda.
  • Weak behaviour: Delivery teams must ask the chief architect verbally for architecture rationale. Consequence: Architecture knowledge is accessible only when one person is available. Smallest intervention: Require every board decision to include a one-paragraph rationale that is published in the repository within 24 hours.

Check your understanding

G18A identifies seven competency categories for enterprise architects. Which combination of categories would be most critical for an architect working in a regulated energy utility?

A maturity assessment rates an organisation at Level 3 across all dimensions but cannot explain what specific improvement would move it to Level 4. What does this suggest about the assessment?

An organisation has one enterprise architect who holds all repository knowledge, runs all board meetings, and is the sole interface to delivery teams. What capability risk does this create?

Key takeaways

  • G18A defines seven competency categories (generic, business, enterprise architecture, programme management, IT general knowledge, technical IT, legal environment) and four proficiency levels (background, awareness, knowledge, expert).
  • Skills assessment is useful when it connects gaps to specific operating consequences, not when it produces a complete matrix for its own sake.
  • G249 maturity assessment should diagnose specific weak behaviours and their consequences using the three-question test: what is weak, what consequence does it create, and what is the smallest improvement?
  • Single-point-of-failure risks are among the most dangerous capability fragilities and are invisible in most maturity scores because the models assess the function as a whole, not the distribution of capability within it.
  • The best capability-growth plans are anchored in consequences and smallest-intervention thinking, not generic multi-year improvement programmes.
  • London's critical skill needs centre on OT/IT boundary expertise, regulatory fluency, stakeholder facilitation, and repository stewardship, with fragility risks in each area.

Standards and sources cited in this module

  1. The TOGAF Standard, 10th Edition (C220)

    Part 5, Capability and Governance

    The core standard covering architecture capability and maturity assessment.

  2. G18A, Architecture Skills Framework

    Full guide

    Primary guide defining all seven competency categories and four proficiency levels for enterprise architects.

  3. G184, Leader's Guide to Establishing and Evolving an EA Capability

    Full guide

    Capability development guidance including skills, roles, and maturity as components of architecture capability.

  4. G249, Architecture Capability Assessment

    Full guide

    Assessment framework for architecture capability maturity, including the maturity dimensions and assessment approach.

You now understand skills, roles, and maturity as operational diagnostics rather than vanity scoring. The next question is how to run TOGAF proportionately without turning it into bureaucracy. That is Module 56.
