Module 62 of 64 · Comparison and Capstone

Common misconceptions and where TOGAF is inappropriate

60 min read · 6 outcomes · 1 interactive diagram · 4 sources cited

This is the fifth of seven Comparison, Tailoring, and Capstone modules. After four honest framework comparisons, this module turns inward. It addresses the most persistent misconceptions about TOGAF, distinguishes bad fit from bad implementation, identifies where TOGAF is genuinely the wrong tool, and explains what actually creates the bureaucracy that critics rightly complain about. This module takes the G21B business-leader perspective seriously: if architecture cannot explain its value in business terms, it has a problem regardless of which framework it uses.

By the end of this module you will be able to:

  • Identify the most persistent misconceptions about TOGAF and explain why each one misrepresents the standard
  • Distinguish bad fit from bad implementation using practical diagnostic tests
  • Name at least five scenarios where TOGAF is genuinely inappropriate rather than merely unpopular
  • Explain the G21B business-leader perspective on architecture value and how it applies to TOGAF adoption decisions
  • Recognise how bureaucracy grows when architecture is detached from delivery and judgement
  • Apply diagnostic honesty to the London case, distinguishing where the method earns its place from where lighter approaches would serve equally well

Real-world case

Dozens of artefacts. Delivery got slower.

The most interesting criticism of TOGAF I have heard came from a delivery lead who said, "We adopted TOGAF, created dozens of artefacts, and delivery got slower. Nobody can tell me what improved."

That is a fair complaint. It is also a complaint that needs careful diagnosis, because it could mean TOGAF was the wrong choice for the problem, or it could mean TOGAF was applied without the tailoring, governance, and delivery connection that make it useful. Those are different failures with different remedies.

The delivery lead's next question was even more revealing: "If I removed all the architecture artefacts from this programme, which decisions would actually get worse?" Nobody in the room could answer confidently. That silence is the diagnostic. If removing the architecture work would not visibly change any enterprise decision, the work is not earning its place.

If a delivery lead says 'We adopted TOGAF, created dozens of artefacts, and delivery got slower,' is the problem the standard or the implementation?

62.1 Misconceptions worth clearing away

Misconception 1: TOGAF means one huge document set

TOGAF does not require every artefact at maximum depth for every problem. The standard explicitly expects tailoring. The ADM is designed to be adapted, and the Series Guides (particularly G186 on practitioners' approach and G210 on agile sprints) specifically address how to run a proportionate architecture practice. An enterprise that produces 200 artefacts for a small bounded change has failed at tailoring, not at TOGAF.

Misconception 2: TOGAF guarantees good architecture

The standard gives structure and guidance. It does not substitute for business understanding, design judgement, or useful governance behaviour. An enterprise can follow every ADM phase dutifully and still produce weak architecture if the team lacks domain knowledge, stakeholder engagement skills, or the ability to make proportionate design trade-offs. TOGAF provides the scaffolding. The quality of the building still depends on the builders.

Misconception 3: TOGAF is only for very large enterprises

Large enterprises often need it most because cross-domain coherence, governed transitions, and repository discipline become critical at scale. But smaller contexts can still use a proportionate subset where cross-domain coherence matters. A medium-sized organisation with a complex technology landscape, regulatory obligations, and multiple delivery teams may genuinely benefit from lightweight TOGAF practice even if it has fewer than 500 employees.

Misconception 4: If TOGAF feels bureaucratic, the standard is always to blame

Sometimes the criticism is valid. Sometimes the standard genuinely does not fit the problem. But often the real cause is poor tailoring, weak decision rights, or governance that reviews everything and clarifies little. Diagnosing the actual cause matters because the remedies are different. If the standard does not fit, use something else. If the implementation is poor, fix the implementation.

Misconception 5: TOGAF certification proves competence

TOGAF certification demonstrates knowledge of the standard. It does not demonstrate the ability to apply it well. Good architecture practice requires domain knowledge, stakeholder engagement skills, design judgement, and the ability to make proportionate trade-offs under real constraints. Certification is a necessary starting point for professional practice. It is not sufficient proof of practitioner quality.

Misconception 6: TOGAF is a waterfall methodology

The ADM is iterative by design. Part 3 of the TOGAF Standard specifically addresses iteration and levels of architecture. The G210 Series Guide explains how to run ADM work with agile sprint structures. The perception that TOGAF is waterfall usually comes from implementations that treated the ADM as a linear sequence rather than an iterative and adaptable method.


62.2 Bad fit versus bad implementation

This distinction is central. Many angry stories about TOGAF are really stories about architecture teams that expanded their review scope without improving decision quality. That is an implementation failure, not proof that the standard itself can never be useful.

The reverse is also true. Sometimes an organisation uses TOGAF language to dignify work that never needed a full enterprise method in the first place. That is a fit failure. Both failures are real. Both require different remedies.

Diagnostic tests for bad implementation

  • The decision test. If you removed the architecture work from this programme, which enterprise decisions would get worse? If the answer is unclear, the architecture is not connected to decisions.
  • The artefact test. For each artefact produced, who uses it and for what decision? If an artefact exists because the framework mentions it rather than because somebody needs it, the tailoring has failed.
  • The governance test. Does the Architecture Board or governance body change any outcomes? If every request is approved and no exceptions are genuinely governed, the governance is ceremonial.
  • The delivery test. Does architecture visibly change release conditions, roadmap logic, or delivery sequencing? If architecture operates at a distance from delivery, it becomes advisory rather than influential.

Diagnostic tests for bad fit

  • The scope test. Does the problem genuinely span multiple domains (business, information, technology, governance)? If the problem is bounded within one domain, a full enterprise method may be disproportionate.
  • The transition test. Does the change involve meaningful transition states that need sequencing? If the change is a single step with no intermediate states, Phase E and Phase F logic adds overhead without proportionate value.
  • The governance test. Does the enterprise genuinely need formal decision rights, exception handling, and compliance reviews? If the context is small enough that everybody knows what is happening, formal governance may not earn its cost.
  • The coordination test. Are multiple teams making interdependent decisions that need a shared architecture? If one team is making all the decisions, the coordination overhead of a full enterprise method is unnecessary.
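As a thought experiment, the four fit tests can be sketched as a simple checklist function. The question text, scoring, and thresholds below are illustrative assumptions for this course, not part of the TOGAF Standard:

```python
# Illustrative sketch only: the four bad-fit diagnostic tests as a checklist.
# The scoring thresholds are assumptions for this example, not from the standard.

FIT_TESTS = {
    "scope": "Does the problem span multiple domains?",
    "transition": "Does the change involve transition states that need sequencing?",
    "governance": "Does the enterprise need formal decision rights and exception handling?",
    "coordination": "Are multiple teams making interdependent decisions?",
}

def assess_fit(answers: dict) -> str:
    """Return a rough fit verdict from yes/no answers to the four tests."""
    passed = sum(1 for name in FIT_TESTS if answers.get(name, False))
    if passed >= 3:
        return "strong fit: a full enterprise method is likely justified"
    if passed == 2:
        return "partial fit: apply a proportionate, heavily tailored subset"
    return "weak fit: a lighter approach will serve better"

# The London case answers yes to all four tests, so it scores as a strong fit.
print(assess_fit({"scope": True, "transition": True,
                  "governance": True, "coordination": True}))
```

A single-team internal tool migration, by contrast, answers no to all four questions and scores as a weak fit, which matches the scenarios in Section 62.3.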

A useful diagnostic question: if you removed the TOGAF-labelled work from this programme, which enterprise decisions would get worse? If the answer is unclear, the problem might be fit, implementation, or both.

Course observation - Diagnostic test for TOGAF value

This question separates genuine architecture value from ceremonial overhead. If removing the architecture work would not change any decisions, the work is not earning its place.

62.3 When TOGAF is a weak or inappropriate choice

This section is deliberately honest. A course that teaches a standard without identifying where that standard is the wrong tool is a sales document, not an education resource. These are genuine scenarios where TOGAF does not fit.

1. A tightly bounded local change

A change that affects one team, one system, one bounded context, and has little cross-domain consequence does not need an enterprise architecture method. Running a full ADM cycle for a single-team internal tool migration creates overhead without proportionate benefit. The right approach is a lightweight design review within the team's own delivery process.

2. An urgent incident or tactical recovery

When the enterprise is responding to a security breach, a critical system failure, or a time-critical regulatory demand, immediate operational control matters more than architecture formalism. Architecture should have anticipated the resilience requirements before the incident. Once the incident is happening, the priority is operational response, not ADM phases.

3. A purely technical design problem inside one team

If the problem is software architecture depth (microservice boundaries, API design, database schema, performance optimisation) inside a single product team, the need is software architecture, not enterprise architecture. TOGAF addresses enterprise-level concerns. It does not replace software architecture practices like domain-driven design, twelve-factor app patterns, or event-driven architecture within a bounded context.

4. An organisation that refuses governance visibility

TOGAF depends on some level of governance visibility: decisions are recorded, exceptions are handled, and compliance is reviewed. If the organisation's culture actively resists any form of architecture governance, introducing TOGAF will create friction without influence. The prerequisite for useful TOGAF practice is an organisation that accepts, at minimum, that cross-domain decisions benefit from visible architecture input.

5. When only a modelling language or classification aid is needed

If the enterprise needs a modelling language (ArchiMate) or a classification schema (Zachman) rather than a broad architecture-development method, TOGAF adds unnecessary process overhead. Module 58 and Module 61 covered this distinction. Do not use the full enterprise method when the real need is a more focused tool.

6. A very early-stage startup

A startup with twelve engineers building a first product does not need enterprise architecture. It needs product-market fit, software architecture decisions, and fast iteration. Introducing TOGAF governance, repository discipline, and formal ADM phases at this stage creates overhead that is disproportionate to the enterprise's size, speed, and decision complexity.

7. When architecture is used to protect hierarchy rather than improve decisions

This is the most politically uncomfortable scenario. If TOGAF is being used to create a gate-keeping function that slows delivery without improving decision quality, the framework is serving hierarchy rather than architecture. The remedy is not more TOGAF. It is honest diagnosis of whether the architecture function is earning its place or protecting its position.

Common misconception

TOGAF is always the right tool if the enterprise is large enough.

Forcing TOGAF onto a problem that does not need it creates the exact bureaucracy that critics rightly complain about. Framework discipline includes knowing when the framework is not the right tool. Size is one factor. Cross-domain complexity, transition-state logic, and governance needs are the real determinants.

62.4 What actually creates bureaucracy

Bureaucracy in architecture is not caused by having a method. It is caused by specific implementation patterns that detach the method from its purpose. Understanding these patterns helps practitioners prevent them.

Unclear decision rights

If nobody knows what architecture is allowed to decide, every review becomes a negotiation instead of a disciplined control point. The Architecture Board exists to make decisions, not to endlessly discuss them. When decision rights are unclear, governance expands to fill the ambiguity and creates the perception of bureaucracy even when the original intention was legitimate.

Artefacts with no decision purpose

Documents multiply when teams create outputs because the framework mentions them rather than because somebody needs them for a decision or handoff. Every artefact in the repository should be able to answer: who uses this, and for what decision? If the answer is "nobody" or "we created it because the ADM phase lists it," the artefact is not earning its maintenance cost.

Architecture outside delivery

If architecture comments from a distance and rarely changes release conditions, roadmap logic, or exceptions, it becomes ceremonial. Architecture that does not visibly influence delivery decisions is architecture that delivery teams will resent and eventually route around. The remedy is not more governance. It is closer connection between architecture decisions and delivery outcomes.

No tailoring discipline

The enterprise keeps every activity from the largest possible interpretation of TOGAF, even when the problem does not justify that weight. This is the most common source of bureaucracy complaints. The ADM at full depth is designed for large, complex, cross-domain enterprise change. Using full depth for every problem, regardless of size or risk, creates disproportionate overhead.

Review bottlenecks without value

When every change, regardless of size or risk, must pass through the same architecture review process, the review becomes a bottleneck. Proportionate governance means high-risk, cross-domain changes receive detailed review, while low-risk, bounded changes receive a lighter touch. If the governance process treats all changes equally, it creates delay without proportionate value.
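The idea of proportionate review routing can be sketched as a small dispatch function. The tier names and risk categories are illustrative assumptions for this course, not TOGAF terms:

```python
# Illustrative sketch: proportionate governance as tiered review routing.
# Tier names and risk levels are assumptions for this example.

def review_tier(risk: str, cross_domain: bool) -> str:
    """Route a change to a review tier by risk level and cross-domain impact."""
    if cross_domain or risk == "high":
        return "architecture-board review"   # detailed, governed review
    if risk == "medium":
        return "domain-architect sign-off"   # lighter check, recorded decision
    return "team-level design review"        # bounded change, local judgement

# A bounded, low-risk configuration change stays inside the team.
print(review_tier("low", cross_domain=False))
```

The design point is that the routing decision happens once, up front, so low-risk bounded changes never queue behind the board at all.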

62.5 The G21B business-leader perspective

The TOGAF Series Guide G21B addresses architecture from the business-leader perspective. This guide is relevant to the misconceptions discussion because it reframes the question. A business leader does not ask "is TOGAF the right framework?" A business leader asks "is architecture earning its place?"

From the G21B perspective, the tests for whether architecture is earning its place are business tests, not framework tests:

  • Does architecture help the enterprise make better investment decisions?
  • Does architecture reduce the risk of cross-domain conflict and rework?
  • Does architecture improve the enterprise's ability to respond to change?
  • Does architecture make compliance and regulatory evidence more traceable?
  • Does architecture accelerate or decelerate delivery?

If the answer to most of these is negative, the problem is real regardless of which framework is in use. The G21B perspective forces practitioners to justify architecture in business terms rather than in framework terms. That is a healthier starting point for any misconception discussion.

G21B's key recommendations for engaging executives

G21B does not ask business leaders to learn TOGAF. It asks architecture practitioners to translate their work into the language business leaders already use. The guide provides several specific recommendations.

Architecture as decision support, not documentation. Business leaders do not need to read architecture documents. They need architecture to improve the decisions they are already making: investment prioritisation, risk acceptance, programme sequencing, and technology direction. If the architecture function cannot point to specific decisions it has improved, it has a communication problem at best and a value problem at worst.

Present options with trade-offs, not recommendations in isolation. Executives are trained to evaluate options. An architecture team that presents a single recommendation without showing the alternatives and their trade-offs is asking for trust rather than enabling judgement. G21B recommends that architecture outputs for executive audiences always include at least two viable options with clearly stated trade-offs in business terms (cost, risk, speed, flexibility).

Speak in outcomes, not in method. An executive does not need to hear that the architecture team completed Phase B and produced a capability map. The executive needs to hear that the analysis identified three capabilities where the enterprise is duplicating investment and two where a gap creates regulatory risk. The method is the means. The business insight is the output.

Make risk concrete. Abstract risk statements ("there is strategic risk in our technology landscape") do not prompt executive action. Concrete risk statements do: "if we proceed with both platforms, we will spend an additional two million pounds per year on integration and still have inconsistent customer data across the two channels." G21B encourages architects to quantify risk in terms executives already use: cost, time, regulatory exposure, and customer impact.

What the board needs to see versus what the architecture team produces

There is a persistent gap between what architecture teams produce and what boards actually need. G21B addresses this gap directly.

  • The board needs: investment decision support, risk visibility, programme sequencing logic, and confidence that cross-domain dependencies are managed.
  • The architecture team often produces: detailed domain models, artefact catalogues, phase-by-phase outputs, and compliance matrices.
  • The translation: every architecture output should have a one-page executive summary that answers three questions: what decision does this support, what are the options, and what is the risk of each option?

If the architecture team cannot produce that one-page translation, the work may be technically sound but organisationally invisible. G21B's central message is that invisible architecture is architecture that does not influence decisions, regardless of its quality.
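The one-page translation can even be modelled as a small data structure whose validity check encodes G21B's framing. The field names below are illustrative assumptions, not defined by the guide:

```python
# Illustrative sketch: the one-page executive summary as a data structure.
# Field names are assumptions for this example, not defined by G21B.
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    cost: str   # stated in business terms, e.g. annual integration spend
    risk: str
    speed: str

@dataclass
class ExecutiveSummary:
    decision_supported: str          # what decision does this support?
    options: list = field(default_factory=list)

    def is_board_ready(self) -> bool:
        # G21B's framing: at least two viable options, each with trade-offs,
        # attached to a named decision.
        return bool(self.decision_supported) and len(self.options) >= 2
```

A summary with a named decision and two costed options passes the check; a single recommendation without alternatives does not, because it asks for trust rather than enabling judgement.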

A business leader does not care whether the architecture team follows TOGAF, BIZBOK, DoDAF, or Zachman. The business leader cares whether architecture helps the enterprise make better decisions, reduce risk, and respond to change. If it does not, the framework choice is irrelevant.

Course observation derived from G21B perspective - Business-leader architecture test

This reframes the entire misconceptions discussion. The question is not whether TOGAF is good. The question is whether architecture is earning its place.

62.5b London Grid: what the board needs versus what the architecture team produces

The London case makes this distinction concrete. The architecture team produces detailed domain models across OT, IT, data, telecom, and governance. The board needs to understand three things:

  1. Investment sequencing. Which programmes should start first, which depend on others, and what happens if the sequence changes? The board does not need the architecture roadmap at full detail. It needs a summary that shows the critical path, the dependencies, and the cost and risk implications of resequencing.
  2. Cross-domain risk. Where do the most dangerous dependencies sit? The telecom single-carrier concentration, the OT/IT boundary, and the publication data-integrity chain are all risks that the board should know about, expressed in terms of regulatory penalty, service disruption, and cost of remediation.
  3. Regulatory confidence. Can the enterprise demonstrate to Ofgem and the NCSC that the architecture supports compliance? The board needs a yes-or-no answer with evidence, not a walkthrough of every Phase D artefact.

The architecture team's job is to do the detailed work and then translate it into these three board-level outputs. If the board has to read the full architecture to understand the enterprise's position, the translation has failed.

62.6 A fair criticism is still valuable

TOGAF deserves criticism when it is used to protect hierarchy, slow delivery without improving quality, or manufacture the appearance of control. Those are real risks that practitioners must take seriously.

The right response is not framework loyalty. It is diagnostic honesty. Ask whether the problem needed enterprise architecture, whether the tailoring was proportionate, whether the governance improved decisions enough to earn its cost, and whether the architecture function is connected to delivery in a way that visibly influences outcomes.

Fair criticism improves architecture practice. Blind loyalty does not. A mature architecture function welcomes honest feedback about its impact because that feedback is the input for continuous improvement of the practice itself.

London Grid Distribution

London is a strong TOGAF fit because the problem is cross-domain, regulated, multi-year, and governance-heavy. It needs business, information, technology, roadmap, and assurance thinking to stay coherent. The diagnostic tests point clearly toward an enterprise method:

  • Scope test. The problem spans business (connections reform, customer service), information (data publication, metadata), technology (OT/IT, telecoms, cyber), and governance (Ofgem, NIS, CAF). Cross-domain method is justified.
  • Transition test. London's roadmap involves multiple transition states over several years. Phase E and Phase F logic earns its keep.
  • Governance test. Regulatory obligations, compliance frameworks, and external reporting requirements all require governed architecture practice with explicit exception handling.
  • Coordination test. Multiple teams (network operations, IT, data, customer services, compliance) make interdependent decisions that need a shared architecture.

That does not mean every London sub-problem deserves full architecture weight. A narrow operational fix inside one bounded service may need a lighter pattern. A configuration change to a single application does not need an ADM cycle. The course therefore uses London both as a strong-fit example and as a reminder that not every decision inside the enterprise inherits the full method.

  • A good fit at enterprise level does not make TOGAF the right answer for every small delivery decision.
  • The discipline lies in using the method where it earns its keep and standing back where it does not.
Check your understanding

An enterprise applies TOGAF but produces 200 artefacts that nobody uses for decisions. Delivery teams complain the process is bureaucratic. Which diagnosis is most accurate?

A startup with twelve engineers asks whether it should adopt TOGAF for a new product build. What is the best advice?

A business leader asks: 'Is our architecture practice earning its place?' The architecture team responds by describing their TOGAF certification levels and the number of artefacts in the repository. Is this a good answer?

Key takeaways

  • Many common criticisms of TOGAF are really criticisms of bad tailoring and weak governance behaviour, not of the standard itself.
  • Some contexts are genuinely too small, too urgent, too bounded, or too technically focused for TOGAF to be the right tool. Framework discipline includes knowing when the framework does not fit.
  • The core question is whether the enterprise needs cross-domain method, transition logic, and governed change. If it does, TOGAF can earn its place. If it does not, use something lighter.
  • Bureaucracy grows from unclear decision rights, artefacts without purpose, architecture disconnected from delivery, and absent tailoring discipline.
  • The G21B business-leader perspective reframes the question: is architecture earning its place in business terms? Architecture should present options with trade-offs, speak in outcomes rather than method, and make risk concrete in cost, time, and regulatory terms. If it cannot, the framework choice is irrelevant.
  • Fair criticism improves architecture practice. Blind loyalty does not. A mature architecture function welcomes honest feedback.

Standards and sources cited in this module

  1. The TOGAF Standard, 10th Edition (C220)

    Parts 0-5

    The core standard. Understanding its actual scope is essential to diagnosing whether criticism targets the standard or the implementation.

  2. G186, Practitioners' Approach to Developing Enterprise Architecture Following the TOGAF ADM

    Full guide

    Practical guide to working through the ADM, including proportionate application.

  3. G210, Applying the TOGAF ADM using Agile Sprints

    Full guide

    Guide to running ADM work with agile sprint structures, relevant to the misconception that TOGAF is inherently waterfall.

  4. G21B, Enterprise Architecture for Business Leaders

    Full guide

    The business-leader perspective on architecture value. Referenced in Section 62.5 for the 'architecture as decision support' framing and executive engagement recommendations.

You now understand the difference between bad fit and bad implementation, where TOGAF genuinely should not be used, and what actually creates bureaucracy. The next module turns that understanding into practical tailoring: how to design a proportionate architecture operating model for real enterprises of different sizes, speeds, and risk profiles. That is Module 63.

Module 62 of 64 · Comparison and Capstone