Running TOGAF without bureaucracy
This is the fifth of six EA Capability and Governance modules. It explains how to run TOGAF proportionately, using the G20F tailoring patterns and the G210 agile sprint guidance. The central argument is that bureaucracy comes from poor implementation, not from the ADM itself, and that proportionate governance means knowing where weight belongs and where it does not. No knowledge beyond the preceding four modules in this stage is assumed.
By the end of this module you will be able to:
- Identify the five most common sources of bureaucratic architecture behaviour and explain why none of them is inherent to TOGAF
- Apply the four-question proportionality test to determine whether a tailored implementation still preserves architecture value
- Describe the five minimum viable TOGAF controls that should survive even the lightest implementation
- Apply the G20F tailoring patterns (iteration, phase selection, artefact selection, governance scaling) to different enterprise contexts
- Explain how G210 integrates ADM work with agile sprint delivery through the architecture runway, sprint-aligned reviews, and decision-point governance
- Use the London case to distinguish proportionate control from unnecessary process weight at three governance tiers

Real-world case · 2020
47 mandatory templates. Delivery teams routed around architecture entirely.
A technology company adopted TOGAF in 2020 after a period of painful integration failures. The intent was sound: establish architectural discipline to prevent the same failures from recurring.
Within eighteen months, the architecture team had created 47 mandatory artefact templates, required Architecture Board approval for every technology decision above a modest threshold, and established a review cadence that added four weeks to every initiative.
Delivery teams learned to avoid the architecture process by labelling their work as "operational improvements" rather than "projects." The architecture function had become so heavy that the enterprise routed around it entirely. The most consequential technology decisions were being made in sprint planning meetings that architecture never attended.
The method was not the problem. The implementation was. The team had confused thoroughness with value and had not asked which artefacts actually improved decisions. The 47 templates existed because the content framework mentioned them, not because anyone had tested whether they changed a delivery choice.
If delivery teams label their work as 'operational improvements' to avoid the architecture process, has governance succeeded or failed?
That story illustrates how disproportionate implementation turns architecture into an obstacle. This module explains how to keep the method useful without making it heavy, using the G20F tailoring patterns, the G210 sprint integration model, and a set of practical tests for proportionate governance.
If you are already confident with proportionate TOGAF implementation, use the knowledge checks to confirm your understanding and move to Module 57: London governance repository and assurance model.
56.1 The five sources of bureaucratic architecture behaviour
TOGAF is often blamed for bureaucracy when the real causes are implementation choices that the standard does not require and, through its tailoring guidance, explicitly advises against. Five sources account for most bureaucratic architecture behaviour.
- Unclear decision rights. If nobody knows what architecture is allowed to decide, every review becomes a negotiation. The Architecture Board reviews everything because it trusts nothing to be handled elsewhere. The solution is not more governance but clearer governance: a decision rights matrix that specifies which decisions need board review, which the domain architect can decide, and which belong to the delivery team.
- Over-centralised review. All decisions flow through one forum regardless of their consequence level. Minor choices queue alongside major ones, creating bottlenecks that affect sprint cadence and delivery morale. The solution is tiered governance: different weights for different decisions, with escalation reserved for decisions that genuinely cross the board threshold.
- Artefact production without decision purpose. Templates exist because the content framework mentions them, not because someone needs them for a decision or handoff. Documents multiply without anyone asking whether they change a delivery choice. The solution is the purpose test: every artefact should serve an identified decision or handoff. If it does not, it should not be produced.
- Fear-driven governance. The architecture function reviews everything because past failures have created a culture of defensive oversight. The response to a governance gap is always more governance, never smarter governance. The solution is to address the specific failure that caused the fear with a specific control, not to add blanket oversight.
- No tailoring discipline. The enterprise keeps every activity from the largest possible interpretation of the ADM, even when the problem does not justify that weight. This is the source the opening story illustrates: the architecture team treated the content framework as a mandatory checklist rather than a menu. The solution is explicit tailoring using the G20F patterns.
None of these causes is inherent to TOGAF. All of them are implementation choices that the standard explicitly advises against. C220 Part 3 states that the ADM is intended to be used flexibly and tailored. The permission to tailor is built into the method.
“The ADM is intended to be used flexibly and tailored to the needs of the organisation.”
The TOGAF Standard, 10th Edition - C220 Part 3, Applying the ADM
The key word is tailored. TOGAF expects adaptation. A rigid, heavyweight implementation is a local choice, not a method requirement. The standard gives permission to tailor, but the enterprise must exercise that permission deliberately rather than defaulting to maximum weight.
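The first source above, unclear decision rights, has a mechanical remedy: a decision rights matrix. The sketch below shows the idea as a simple lookup. The decision categories and owner names are illustrative assumptions, not terms from the TOGAF standard; a real matrix would use the enterprise's own decision taxonomy.

```python
# Illustrative sketch only: a decision rights matrix as a lookup table.
# Categories and owners below are hypothetical examples, not TOGAF terms.
DECISION_RIGHTS = {
    "cross_domain_integration": "architecture_board",
    "domain_design_choice":     "domain_architect",
    "implementation_detail":    "delivery_team",
}

def decision_owner(category: str) -> str:
    """Return who decides; unmapped categories escalate by default
    until the matrix is deliberately extended."""
    return DECISION_RIGHTS.get(category, "architecture_board")
```

The design point is that the default is escalation, so a gap in the matrix surfaces as a board question rather than as an ungoverned decision.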
56.2 The four-question proportionality test
A lightweight TOGAF implementation remains sound if the enterprise can answer four questions positively. These questions test whether the essential architecture value survives the tailoring. If all four answers are positive, the implementation is proportionate. If any answer is negative, the tailoring has removed too much.
- Can the enterprise explain its architecture decisions to a sceptical stakeholder? This tests whether decision traceability survives. If nobody can explain why a target state was chosen, the architecture has no explanatory power. The minimum requirement is a decision log that records significant decisions, their rationale, and their status.
- Can it control material exceptions with visibility and follow-on discipline? This tests whether the waiver process from Module 54 works. If deviations accumulate invisibly with no expiry conditions and no compensating controls, the architecture is fictional. The minimum requirement is an exception register with the six waiver elements.
- Can it hold a cross-domain picture of change that delivery teams recognise as useful? This tests whether the repository serves delivery. If delivery teams do not use architecture outputs, the cross-domain picture exists only for the architecture team. The minimum requirement is a target-state view that delivery teams reference when making design decisions.
- Can the repository answer a real question faster than asking a senior architect from memory? This tests whether the repository is operational. If the fastest way to get architecture information is to find a specific person, the repository is decorative. The minimum requirement is that a delivery lead can find the rationale behind a specific constraint within five minutes using the repository alone.
If those four abilities survive the tailoring, the implementation is proportionate. If they disappear, the enterprise has stripped the method past the point of usefulness. Conversely, if the enterprise has all four abilities plus significant additional process that does not improve any of them, the additional process is bureaucracy.
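The four questions above can be run as a simple checklist. This is an illustrative sketch only; the question identifiers are shorthand invented here for the four tests, not names from any TOGAF guide.

```python
# Illustrative sketch: the four-question proportionality test as a checklist.
# The identifiers are shorthand for the four questions, invented here.
PROPORTIONALITY_QUESTIONS = [
    "decision_traceability",   # explainable to a sceptical stakeholder?
    "exception_visibility",    # material exceptions governed, with follow-on?
    "cross_domain_picture",    # target-state view that delivery teams use?
    "repository_operational",  # faster than asking a senior architect?
]

def assess_tailoring(answers: dict[str, bool]) -> str:
    """Classify a tailored implementation from the four yes/no answers.
    An unanswered question counts as a negative answer."""
    missing = [q for q in PROPORTIONALITY_QUESTIONS if not answers.get(q, False)]
    if missing:
        return "tailored too far: restore " + ", ".join(missing)
    return "proportionate"
```

Treating an unanswered question as a failure mirrors the text: if nobody can demonstrate the ability, it has not survived the tailoring.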
56.3 Minimum viable TOGAF controls
Even the lightest TOGAF implementation should retain five control points that protect architecture value. These five controls are the floor, not the ceiling. A regulated enterprise like London will need more. A small product team might need only these. The point is that removing any of them erodes the architecture's ability to explain and govern itself.
- Scope statement. A clear, short description of the enterprise problem, architecture boundary, and decision horizon. Without this, nobody can judge whether the architecture is relevant or whether a specific decision falls inside or outside its scope. The scope statement should be one page, not a document.
- Active principles. A small set of architecture principles (typically 8 to 15) that are specific enough to constrain real decisions and reviewed regularly enough to stay current. Principles that cannot be used to settle a design disagreement are too vague. Principles that have not been reviewed in over a year are likely stale.
- Decision log. A record of significant architecture decisions, their rationale, and their status. The log does not need to be elaborate. It needs to be findable and maintained. The test is whether a delivery lead who was not in the room can understand why a constraint exists.
- Exception register. A record of active deviations, their compensating controls, and their expiry conditions. This is what prevents architecture debt from accumulating invisibly. Without it, the enterprise has no mechanism to distinguish between governed exceptions and ungoverned drift.
- Lightweight review path. A defined way for material exceptions to reach a decision forum, even if that forum is a weekly 30-minute architecture review rather than a formal board. The review path should have a defined threshold (what triggers escalation) and a defined response time (how quickly the forum will decide).
Common misconception
“TOGAF inherently requires heavy process and bureaucracy.”
Bureaucracy comes from poor implementation, not from the TOGAF method itself. The ADM is designed to be tailored. C220 Part 3 explicitly says the method should be used 'flexibly and tailored'. A proportionate implementation keeps decision traceability, exception visibility, and cross-domain coherence while removing unnecessary weight. The five minimum controls are the floor that preserves architecture value.
56.4 G20F tailoring patterns in detail
G20F provides TOGAF-endorsed patterns for tailoring the ADM to different enterprise contexts. The guide recognises that one-size-fits-all application of the full ADM creates the exact overhead problems described in the opening story. G20F offers four tailoring strategies, each of which can be applied independently or in combination.
Iteration and levels
G20F supports iterating through ADM phases at different levels of detail. A strategic-level pass may touch all phases lightly to establish direction. A segment-level pass may focus on specific domains in depth. A capability-level pass may address a single bounded problem within an existing target architecture. Each level adjusts the depth of analysis and the weight of governance without abandoning the method's structure. The enterprise chooses the level that matches the problem, not the level that matches a standard operating procedure.
Phase selection
Not every initiative requires every ADM phase. A narrow change within an existing target architecture may start at Phase E (migration planning) rather than repeating Phases A through D. An initiative that affects only the technology domain may focus on Phase D without revisiting the business or information systems architecture. G20F makes this explicit: the enterprise should enter the ADM at the point that matches the problem.
Artefact selection
G20F supports producing only those artefacts that serve a decision or handoff purpose. The content framework is a menu, not a mandatory checklist. If an artefact does not change a delivery decision or feed a governance review, it should not be produced. This single principle eliminates more unnecessary weight than any other tailoring choice. The 47-template problem from the opening story is a direct consequence of treating the content framework as mandatory.
Governance scaling
G20F supports scaling governance from a full Architecture Board with formal compliance reviews to a lightweight review and escalation path, depending on enterprise risk and scope. A large, regulated enterprise may need full board governance for cross-domain decisions and lighter governance for bounded changes. A small enterprise may need only a weekly review meeting with an escalation path for material exceptions.
56.5 G210: TOGAF with agile sprint delivery
G210 (Applying the TOGAF ADM using Agile Sprints) addresses the common objection that TOGAF is incompatible with agile delivery. Module 49 covered the G210 sprint integration model in the context of delivery cadence. Here the focus is on how G210 supports proportionate governance by changing the cadence of architecture work without weakening its substance.
Architecture runway
Architecture work runs ahead of delivery sprints, producing just-enough architecture to guide the next sprint cycle. The architecture team maintains a runway of decisions and views that delivery teams can consume. The runway prevents the waterfall trap (define everything before delivery starts) without creating the divergence trap (sprint into territory with no architectural guidance).
Sprint-aligned reviews
Architecture reviews happen at sprint boundaries rather than in separate governance cycles. This integrates architecture into delivery cadence rather than creating a parallel process that adds calendar time. The review at each sprint boundary asks: did the sprint stay within guardrails, and has the sprint produced any learning that should update the architecture?
Decision-point governance
Rather than reviewing every sprint output, governance focuses on decisions that cross the board-level threshold. Sprint-level decisions that stay within published guardrails do not need board review. Decisions that affect cross-domain boundaries, change the target state, or introduce a material deviation do need review. This is proportionate governance in action: weight where it matters, lightness where it does not.
G210 does not dilute TOGAF. It adapts the cadence while preserving the method's core value: decision traceability, cross-domain coherence, and governed exceptions. The architecture function stays relevant because it operates at delivery speed rather than at its own separate speed.
“G210 provides guidance on applying the TOGAF ADM within an agile delivery context, ensuring architecture and delivery operate in coordinated cadence.”
The TOGAF Standard, 10th Edition - G210, Applying the TOGAF ADM using Agile Sprints
The guide does not reduce TOGAF to agile ceremonies. It shows how the ADM's substance (decision traceability, cross-domain coherence, governed exceptions) can be delivered at sprint speed through the architecture runway and decision-point governance.
56.6 London Grid Distribution: proportionate governance at three tiers
London is a relatively strong-fit TOGAF case because the change is wide, regulated, and dependent on good governance. Even here, proportion matters. London should not attempt maximum TOGAF everywhere. The discipline lies in assigning the right weight to the right decision, not in applying maximum weight everywhere or minimum weight everywhere.
Tier 1: Full governance weight
Applies to cross-domain decisions, target-state changes, major exceptions, and regulatory milestones. These earn the cost of formal board review, documented architecture contracts, and structured G21H compliance assessment. Examples in London include:
- Changes to the OT/IT boundary architecture that affect operational safety.
- Changes to the information authority model that affect regulatory publication.
- New cross-domain integration patterns that affect multiple work packages.
- Material exceptions to architecture principles with regulatory consequence.
Tier 2: Lighter governance
Applies to domain-specific design choices within the approved target state, technology configuration within approved platforms, and sprint-level decisions that stay within published guardrails. These need domain architect review and recording in the decision log, but not formal board review. Examples in London include:
- Internal design of a connection-offer calculation service within the approved integration pattern.
- Configuration changes to the publication pipeline within approved data standards.
- Local workflow optimisation within an operational area that does not change cross-domain boundaries.
Tier 3: Minimal governance
Applies to operational fixes, minor cosmetic changes, and decisions entirely within one bounded service that do not affect cross-domain boundaries, data authority, or regulatory compliance. These need only standard change management, not architecture review. Examples in London include:
- Bug fixes within a single service that do not change the data model or integration pattern.
- User interface refinements that do not change the underlying business process or data authority rules.
- Performance tuning within an approved technology platform.
The four-question test applied to London
- Decision traceability: Can the Architecture Board explain to Ofgem why a specific technology choice was made? If yes, the decision log is working.
- Exception visibility: Are all waivers (such as the API gateway waiver from Module 54) recorded with expiry dates and compensating controls? If yes, the exception register is working.
- Cross-domain picture: Do delivery teams reference the target-state views when making design decisions? If yes, the repository serves delivery.
- Repository operationality: Can a delivery lead find the rationale behind the OT/IT boundary constraint without asking the chief architect? If yes, the repository is operational.
An architecture team requires 47 mandatory artefact templates for every initiative. Delivery teams have started labelling work as 'operational improvements' to avoid the process. What does this indicate?
After tailoring, an organisation has no decision log, no exception register, and no cross-domain architecture view. Has the tailoring gone too far?
G20F supports entering the ADM at Phase E for a narrow change within an existing target architecture rather than repeating Phases A through D. What principle does this reflect?
London applies three governance tiers. A delivery team changes the integration pattern for the connection-offer calculation engine, affecting two work packages. Which tier should apply?
Key takeaways
- Bureaucracy comes from five implementation choices (unclear decision rights, over-centralised review, purposeless artefacts, fear-driven governance, no tailoring), not from the TOGAF method itself.
- The four-question proportionality test checks whether architecture value survives tailoring: decision traceability, exception visibility, cross-domain picture usefulness, and repository operationality.
- Five minimum controls should survive even the lightest implementation: scope statement, active principles, decision log, exception register, and a lightweight review path.
- G20F provides four endorsed tailoring patterns: iteration and levels, phase selection, artefact selection, and governance scaling.
- G210 shows how TOGAF integrates with agile delivery through the architecture runway, sprint-aligned reviews, and decision-point governance.
- London uses three governance tiers (full weight, lighter, minimal) to assign the right governance cost to the right decision, not maximum weight everywhere.
Standards and sources cited in this module
The TOGAF Standard, 10th Edition (C220)
Part 3, Applying the ADM, and Part 5, Governance
The core standard covering ADM adaptation, tailoring, and proportionate governance.
G20F, Applying the TOGAF ADM: Iteration and Levels
Full guide
Endorsed tailoring patterns for adapting ADM depth and cadence to different enterprise contexts.
G210, Applying the TOGAF ADM using Agile Sprints
Full guide
Guide to running ADM work with agile sprint structures through architecture runway and sprint-aligned reviews.
G186, Practitioners' Approach to Developing Enterprise Architecture Following the TOGAF ADM
Full guide
Practical guide to proportionate ADM application and tailoring for different enterprise sizes and contexts.
You now understand how to keep TOGAF proportionate without gutting the method. The final module of this stage brings all of Stage 7 together in the London governance repository and assurance model. That is Module 57.
Module 56 of 64 · EA Capability and Governance
