Module 39 of 64 · Technology Architecture

Microservices and when not to use them

60 min read · 7 outcomes · 5 standards cited

This is the fourth of eight Technology Architecture modules. Microservices are often introduced as a modern engineering pattern, but their real consequences are broader. This module treats the decomposition decision as an enterprise-architecture question about domain clarity, deployment pressure, and operating maturity, and provides a structured decision framework for when NOT to use microservices.

By the end of this module you will be able to:

  • Explain the architectural consequences of microservices beyond code organisation, including team structure, observability, resilience, and governance
  • Describe the operating capabilities and domain conditions that make microservices more or less suitable
  • Apply the four-dimension decomposition assessment to a real technology architecture decision
  • Recognise when a modular monolith is the better architectural choice and explain why in enterprise terms
  • Describe the monolith-first strategy and when it applies
  • Explain how TOGAF handles emerging technologies (G21D) through the ADM lens rather than creating separate innovation tracks
  • Relate service-boundary decisions to London planning, publication, and telemetry responsibilities

Real-world case · 2024

Forty-two services. Nine platform engineers. Slower than the monolith.

A UK transport authority decided in early 2024 to rebuild its passenger-information system as microservices. The business case cited faster delivery, independent scaling, and modern engineering practice.

Eighteen months later, the system had forty-two services, a dedicated platform team of nine, and a release coordination overhead that the original monolith had never required. Response times for the most common passenger query had increased by 30 per cent because of inter-service latency.

The engineering director asked the architecture team a question that should have been asked at the start: "Did we actually need independent deployment boundaries for forty-two things that always change together?" The answer was no. The domain boundaries were unclear, the deployment pressure was absent, and the operating maturity was insufficient. The architecture had been chosen for its modernity rather than its fit.

If forty-two services always change together on the same release cycle, did the enterprise benefit from independent deployment boundaries or just create coordination overhead?

That case is useful because it demonstrates how a sound engineering pattern can become an enterprise liability when adopted without the conditions that justify it. This module explains when microservices help, when they do not, the structured assessment that should precede the commitment, and why a monolith-first strategy is often the wiser starting point.
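The latency effect in the case can be made concrete with a little arithmetic. The sketch below shows how per-hop network overhead accumulates when an in-process call chain becomes a chain of synchronous service calls. All figures are invented for illustration and are not taken from the case.

```python
# Hypothetical illustration: how per-hop overhead accumulates when an
# in-process call chain becomes a chain of synchronous service calls.
# All numbers below are invented for illustration, not from the case.

def monolith_latency_ms(work_ms: list[float]) -> float:
    """In-process calls: total latency is just the sum of the real work."""
    return sum(work_ms)

def microservice_latency_ms(work_ms: list[float], hop_overhead_ms: float) -> float:
    """Each step after the first crosses a network boundary, paying
    serialisation + network + deserialisation overhead per hop."""
    hops = max(len(work_ms) - 1, 0)
    return sum(work_ms) + hops * hop_overhead_ms

# A passenger query touching five components, each doing 20 ms of real work,
# with an assumed 8 ms of overhead per inter-service hop.
work = [20.0] * 5
mono = monolith_latency_ms(work)            # 100 ms of real work
micro = microservice_latency_ms(work, 8.0)  # 100 ms + 4 hops * 8 ms = 132 ms
print(f"monolith: {mono:.0f} ms, microservices: {micro:.0f} ms "
      f"(+{(micro - mono) / mono:.0%})")
```

Even modest per-hop overhead compounds across a call chain, which is how a decomposition chosen for delivery speed can degrade the most common query path.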

39.1 Why microservices are an enterprise question

Microservices are often introduced as a modern engineering pattern, but their real consequences are broader. They affect team structure, deployment discipline, observability, resilience, support burden, platform needs, and how quickly change can spread or be contained across the estate. That is why this topic belongs in enterprise architecture.

The TOGAF Series Guide G21I addresses microservices explicitly. It positions the decomposition decision as an architecture concern that must be evaluated against enterprise conditions, not just engineering preferences. The guide makes clear that the pattern is not inherently good or bad. Its value depends entirely on whether the enterprise can sustain the operating model it requires.

The question is not "are microservices modern?" It is "does this enterprise benefit from this level of decomposition, and can it pay the operating cost?" Those are enterprise questions, not engineering questions.

The architectural value of microservices depends not on the pattern itself but on whether the enterprise has the domain clarity, deployment need, and operating maturity to sustain it.

Working definition derived from G21I, Using the TOGAF Standard with the TOGAF Series Guide for Microservices

The pattern is not wrong. It is expensive when adopted without the conditions that justify it. Enterprise architecture should test those conditions before committing.

39.2 When the pattern can help

Microservices provide genuine architectural value under specific conditions. All three conditions should be present, not just one.

Strong domain boundaries. The business responsibilities are clear enough that service boundaries support them rather than fragment them artificially. If the domain model is still unclear, drawing service boundaries is guesswork, and guessed boundaries tend to create expensive cross-service dependencies. The capability map from Phase B should provide the domain clarity. If it does not, the microservice boundaries will be invented rather than discovered.

Real deployment pressure. Different parts of the estate genuinely need to change or scale at different speeds. If the entire system changes together on the same cadence, independent deployment boundaries add coordination cost without delivery benefit. The test is straightforward: do the proposed services actually release independently, or do they always change together?

Operating maturity. The enterprise can support observability, release discipline, interface governance, and failure management at the required level. Microservices without mature observability create systems that nobody can debug. Microservices without interface governance create systems that nobody can change safely. Microservices without distributed-tracing capability create systems where latency problems are invisible until customers complain.

39.3 When the pattern is a poor fit: the decision framework

The enterprise should resist microservices when the conditions that justify them are absent. G21I does not say "never use microservices." It says: test the conditions first and choose the simplest decomposition that meets the business need. Here is the full decision framework.

Condition 1: The organisation cannot support the operating overhead. Observability, release coordination, interface versioning, failure propagation, and distributed tracing all require capabilities that many enterprises have not yet built. If the enterprise does not have these capabilities and cannot realistically build them before the system goes live, microservices will create more problems than they solve.

Condition 2: The domain boundaries are still unclear. Service boundaries drawn without domain confidence tend to split things that belong together and join things that should be separate. The rework cost when boundaries are wrong is much higher in a distributed system than in a modular monolith because every boundary change affects network communication, data ownership, failure handling, and interface contracts.

Condition 3: A modular monolith would meet the business need. Not every system needs independent deployment boundaries. A well-structured modular application with clear internal boundaries can deliver most of the maintainability benefits of microservices without the distributed-systems overhead. Internal module boundaries are cheap to move. Service boundaries are expensive to move.

Condition 4: The team structure does not align with service boundaries. Conway's Law is not optional. Service boundaries that cross team boundaries create communication overhead that offsets the architectural benefit. If one team owns three services and another team needs to change two of them for every release, the service boundaries are creating coordination cost rather than reducing it.

If two or more of these conditions are present, a simpler architecture is almost certainly the better choice.

Common misconception

Microservices are the default modern answer for any new system.

Treating microservices as the default usually hides the fact that the real domain and operating questions have not yet been settled. The pattern is not wrong. It is expensive when adopted without the conditions that justify it. A modular monolith with clear internal boundaries is a perfectly valid and often superior architecture choice.

39.4 The monolith-first strategy

A monolith-first strategy means starting with a well-structured modular application and extracting services only when the conditions justify extraction. This approach has significant advantages that enterprise architects should understand.

Domain boundaries emerge from experience. When the system runs as a modular monolith, the team discovers the real domain boundaries through operational experience rather than guessing them from a whiteboard exercise. The modules that change at different speeds, the modules that scale differently, and the modules that are owned by different teams become visible over time.

Extraction is cheaper than correction. Extracting a well-defined module into a separate service is a relatively straightforward operation. Merging two microservices that were split incorrectly is a much harder operation because data ownership, interface contracts, and failure handling all need to be reunified.

Operating capability builds gradually. The enterprise can build its observability, distributed tracing, and interface governance capabilities incrementally as services are extracted, rather than needing all of that infrastructure in place before the first line of code runs.

Delivery is faster initially. A modular monolith is simpler to develop, test, deploy, and debug than a distributed system. The team can focus on business value rather than distributed-systems infrastructure during the critical early delivery period.

The monolith-first strategy is not anti-microservice. It is pro-evidence. It says: earn your service boundaries through operational evidence rather than assuming them from architectural theory.
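The strategy above hinges on module boundaries being cheap to move and cheap to extract. One way to keep extraction cheap is to put each module behind an explicit internal interface from the start, so that a later extraction only swaps the adapter behind it. The sketch below illustrates this with hypothetical names (`PublicationPort`, `InProcessPublication`, `RemotePublication`); none of these come from TOGAF or the module text.

```python
# Sketch of a modular-monolith boundary that can later become a service
# boundary. All names here are hypothetical illustrations.
from typing import Protocol

class PublicationPort(Protocol):
    """The module's public interface. Callers depend on this,
    never on the implementation, so extraction only swaps the adapter."""
    def publish(self, dataset_id: str) -> str: ...

class InProcessPublication:
    """Today: an ordinary in-process module behind the port."""
    def publish(self, dataset_id: str) -> str:
        return f"published {dataset_id} in-process"

class RemotePublication:
    """Later, if the assessment justifies extraction: same port,
    now backed by a network call (sketched, not implemented)."""
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def publish(self, dataset_id: str) -> str:
        # In a real extraction this would be an HTTP call to the new service.
        return f"published {dataset_id} via {self.base_url}"

def run_quarterly_publication(port: PublicationPort) -> str:
    # Caller code is identical before and after extraction.
    return port.publish("ltds-2024-q1")
```

Because callers only see the port, the extraction decision can be deferred until the operational evidence arrives, which is exactly what monolith-first requires.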

Service decomposition decisions should be judged against domain clarity, deployment pressure, and operating maturity, not against engineering fashion.

39.5 The four-dimension decomposition assessment

A structured decomposition assessment should cover four dimensions before the architecture commits to microservices. For each dimension, the architecture team should rate the enterprise's readiness and record the evidence.

  1. Domain quality. Are the business boundaries clear, stable, and well understood? Evidence: the capability map and value-stream analysis from Phase B should provide clear domain boundaries. If Phase B produced unclear or contested boundaries, service decomposition should wait.
  2. Deployment pressure. Do different parts of the system genuinely need to change or scale independently? Evidence: release history, change-request patterns, and scaling requirements should show that different modules have genuinely different change rates or load profiles.
  3. Operating capability. Can the enterprise support observability, interface governance, failure management, and release discipline for the number of services proposed? Evidence: existing tooling, team skills, incident-response capability, and platform maturity should all be assessed honestly.
  4. Team structure. Does the team structure align with the proposed service boundaries? Evidence: the team topology should show that each service can be owned, developed, tested, and released by a single team without requiring coordination across team boundaries for routine changes.

Scoring. If two or more of these dimensions score weakly, a simpler architecture is almost certainly the better choice. The assessment should be recorded as an architecture decision so that the reasoning is visible to the Architecture Board and can be revisited when conditions change.
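The scoring rule above can be sketched as a small, recordable decision structure. The "two or more weak dimensions" rule follows the text; the field names, the yes/no rating scale, and the example evidence strings are assumptions for illustration.

```python
# Minimal sketch of the four-dimension decomposition assessment as a
# recordable decision. The "two or more weak" rule follows the module text;
# field names, the boolean rating scale, and example evidence are assumed.
from dataclasses import dataclass

@dataclass
class DimensionRating:
    dimension: str   # domain quality, deployment pressure, ...
    strong: bool     # honest rating of readiness on this dimension
    evidence: str    # what was examined: release history, capability map, ...

def recommend(ratings: list[DimensionRating]) -> str:
    """Apply the scoring rule: two or more weak dimensions -> simpler architecture."""
    weak = [r.dimension for r in ratings if not r.strong]
    if len(weak) >= 2:
        return f"simpler architecture (weak: {', '.join(weak)})"
    return "microservices assessment may proceed"

assessment = [
    DimensionRating("domain quality", False, "Phase B boundaries still contested"),
    DimensionRating("deployment pressure", False, "all modules release together"),
    DimensionRating("operating capability", True, "tracing and on-call in place"),
    DimensionRating("team structure", True, "one team per proposed service"),
]
print(recommend(assessment))
```

Recording the ratings and evidence in a structure like this is one way to make the reasoning visible to the Architecture Board and revisitable when conditions change.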

39.6 Emerging technologies and the ADM: the G21D approach

The TOGAF Series Guide G21D addresses how enterprise architecture should handle emerging technologies such as artificial intelligence, the Internet of Things, blockchain, and edge computing. The guide's central message is that emerging technologies should be assessed through the existing ADM lens rather than managed through separate innovation tracks that bypass architecture governance.

The G21D principle: assess through the ADM, do not create separate tracks

When an enterprise encounters a new technology, the temptation is to create an innovation lab, a proof-of-concept programme, or a separate delivery track that operates outside normal architecture governance. G21D argues that this approach creates two problems. First, it delays the integration decision: the proof of concept succeeds, but nobody has assessed how the technology fits the enterprise landscape, data architecture, security posture, or operating model. Second, it creates "innovation theatre" where experiments run in isolation and never connect to enterprise-level change.

The alternative is to assess emerging technologies within the ADM. Phase A asks whether the technology addresses a genuine business driver. Phase B asks what capabilities it would support or replace. Phase C asks what data it produces, consumes, or transforms. Phase D asks what infrastructure, integration, and security implications it creates. This is not bureaucracy. It is the minimum assessment that prevents the enterprise from adopting technology without understanding its consequences.

Innovation theatre versus architecture-governed experimentation

Innovation theatre produces demonstrations that impress stakeholders but never reach production because nobody planned for integration, data governance, security, or operating cost. Architecture-governed experimentation uses the same ADM disciplines but applies them proportionately: a lightweight Phase A assessment for the business case, a focused Phase D assessment for technical feasibility, and explicit criteria for when the experiment graduates to the enterprise roadmap or is retired.

The distinction matters because enterprises often accumulate dozens of proofs of concept that consume engineering time without producing enterprise value. G21D recommends that every emerging-technology experiment has a defined exit condition: either it graduates into the architecture landscape through a governed ADM process, or it is explicitly retired with lessons recorded.
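The exit-condition rule above can be sketched as a data structure: every experiment carries its graduation criteria and must end in one of two explicit outcomes. This is a hypothetical illustration of G21D-style governance; the field names, enum values, and the example experiment are all invented.

```python
# Hypothetical sketch of G21D-style experiment governance: every experiment
# carries explicit graduation criteria and must resolve to an explicit
# outcome. All field names and the example content are invented.
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    RUNNING = "running"
    GRADUATED = "graduated to the architecture landscape"
    RETIRED = "retired with lessons recorded"

@dataclass
class Experiment:
    technology: str
    business_driver: str            # Phase A: why at all
    graduation_criteria: list[str]  # explicit, agreed up front
    review_date: str
    outcome: Outcome = Outcome.RUNNING

def review(exp: Experiment, criteria_met: bool) -> Experiment:
    # The rule: graduate through the ADM, or retire explicitly.
    # An experiment never simply lingers past its review date.
    exp.outcome = Outcome.GRADUATED if criteria_met else Outcome.RETIRED
    return exp

edge_poc = Experiment(
    technology="edge analytics for fault prediction",
    business_driver="reduce unplanned outages",
    graduation_criteria=["prediction accuracy target met",
                         "OT security model approved",
                         "integration path to DMS defined"],
    review_date="2025-06-30",
)
```

The point of the structure is not the code; it is that "running" is never a permanent state, which is what separates governed experimentation from an accumulating backlog of proofs of concept.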

Practical assessment for common emerging technologies

Artificial intelligence and machine learning. The architecture questions are about data quality, model governance, explainability requirements, and where AI outputs enter decision chains. An AI model that produces predictions used in regulatory reporting has different architecture requirements from one that suggests internal process improvements.

Internet of Things and edge computing. The architecture questions are about data volume, network dependency, edge-to-cloud integration patterns, security at the device level, and lifecycle management for large device estates. IoT creates concentration risk if all devices depend on a single cloud platform.

Blockchain and distributed ledger. The architecture questions are about whether the trust model genuinely requires distributed consensus or whether a traditional database with audit trail would serve equally well at lower complexity. Most enterprise blockchain proposals fail the simpler-alternative test.

Emerging technologies should be assessed through the ADM, not around it. A proof of concept that bypasses architecture governance creates excitement without integration and demonstrations without consequences.

Working definition derived from G21D, TOGAF and Emerging Technologies

G21D's core message is that architecture governance is not the enemy of innovation. It is the discipline that turns experiments into enterprise value.

39.7 London Grid: emerging technologies in the architecture

The London case involves several emerging technologies that need G21D-style assessment rather than separate innovation tracks.

Smart meters and IoT. The smart-meter estate is a large-scale IoT deployment. The architecture questions include: how does meter data flow from devices through concentrators to the head-end system? What happens when the communication network fails? How is meter firmware managed across millions of devices? These questions belong in Phase D, not in a separate innovation workstream.

Edge computing for fault prediction. If the enterprise explores edge analytics for predicting equipment failures from sensor data, the architecture assessment must cover data quality at the edge, the latency requirements for predictions that affect operational decisions, the security model for edge devices on the OT network, and the integration path between edge predictions and the distribution management system.

AI for network planning. Machine-learning models that predict demand growth or optimise connection offers must be assessed for data provenance (what training data, and is it representative?), model governance (who validates the model, and how often?), and regulatory acceptability (will Ofgem accept AI-generated capacity assessments in LTDS publications?).

In each case, the G21D approach is the same: assess through the ADM, define graduation criteria, and retire experiments that do not meet them. London does not need an innovation lab. It needs architecture discipline applied to emerging technology decisions with the same rigour as any other technology choice.

39.8 London Grid Distribution: decomposition in a utility

The London case is useful because some responsibilities may benefit from clearer service decomposition while others would become slower and more fragile if split too aggressively. Publication, planning, telemetry, and governance all have different change pressures.

Publication may benefit from separation. The LTDS publication pipeline has a distinct regulatory cadence (quarterly or annual), clear domain boundaries (data validation, formatting, and submission), and a different change rate from operational systems. The four-dimension assessment would likely show strong domain quality, real deployment pressure (regulatory deadlines differ from operational cadences), and manageable operating requirements. This is a reasonable extraction candidate.

OT telemetry should probably stay monolithic. The coupling between SCADA telemetry, the distribution management system, and protection relay management is tight. These systems change together on the same long cycle (three to five years). Breaking them into separate services would add distributed-systems complexity to a domain where simplicity and reliability are paramount. The four-dimension assessment would show weak deployment pressure and high risk from added complexity.

The connections workflow is a borderline case. The digital customer self-service channel has a faster change rate than the backend network assessment systems. If the team structure supports it, the customer-facing layer might benefit from separation. But the backend assessment depends on real-time network data from operational systems, creating tight coupling that service boundaries would need to manage carefully.

  • London should not be decomposed into microservices by default. Each decomposition decision should follow the four-dimension assessment.
  • The operating model is as important as the service diagram. A service architecture that the enterprise cannot observe, govern, or recover is worse than a monolith that it can.
  • Publication may benefit from separation because it has a distinct regulatory cadence. OT telemetry may not, because its coupling to operational systems is tight and its change rate is low.
  • A monolith-first strategy for the connections workflow would let the enterprise discover the real domain boundaries through operational experience before committing to service extraction.
Check your understanding (1 of 2)

An architecture team proposes decomposing a planning system into twelve microservices. The planning domain is still being modelled, the team has no distributed-tracing capability, and all twelve components change on the same quarterly release cycle. What is the strongest architectural concern?

What is the primary advantage of a monolith-first strategy?

Check your understanding (2 of 2)

A gas distribution company separated its publication service from its planning service and reports that the separation has reduced release coordination time. Which condition most likely explains the success?

Before committing to microservices, an enterprise architecture review should assess which four dimensions?

Key takeaways

  • Microservices change more than code structure. They change operating demands, team coordination, observability requirements, and governance complexity across the enterprise.
  • The four-dimension assessment (domain quality, deployment pressure, operating capability, team structure) should be completed and recorded before the architecture commits to microservices.
  • A modular monolith with clear internal boundaries is often the stronger choice when domain boundaries are unclear, deployment pressure is absent, or operating maturity is insufficient.
  • The monolith-first strategy is not anti-microservice. It is pro-evidence: earn service boundaries through operational experience rather than assuming them from theory.
  • Conway's Law is not optional. Service boundaries should align with team boundaries and communication patterns. Misaligned boundaries create coordination overhead that offsets the architectural benefit.
  • In London, publication may benefit from service separation due to its distinct regulatory cadence, while OT telemetry should stay tightly coupled because its change rate is low and reliability is paramount.
  • Emerging technologies (AI, IoT, edge computing) should be assessed through the ADM, not around it. G21D recommends architecture-governed experimentation with explicit graduation criteria rather than innovation theatre that bypasses governance.

Standards and sources cited in this module

  1. G21I, Using the TOGAF Standard with the TOGAF Series Guide for Microservices

    Full guide

    The primary TOGAF guidance for microservices and service decomposition decisions. Referenced throughout this module for enterprise-level decomposition reasoning.

  2. The TOGAF Standard, 10th Edition (C220)

    Part 1, Phase D and Part 4, Building Blocks

    The core standard for technology architecture and the building-block framework that underpins service decomposition decisions.

  3. G217, Using the TOGAF Standard in the Digital Enterprise

    Full guide

    Digital enterprise guidance relevant to multi-speed delivery, platform-as-capability, and the guardrail governance patterns that support controlled decomposition.

  4. G21D, TOGAF and Emerging Technologies

    Full guide

    The primary guide for assessing emerging technologies (AI, IoT, blockchain, edge computing) through the ADM lens. Referenced in Section 39.6 for architecture-governed experimentation.

  5. Digitalisation Strategy and Action Plan 2025-2030, Ofgem

    Full strategy document

    Regulatory context for London Grid Distribution that creates the distinct publication cadence justifying potential service separation.

You now understand when microservices help, when a simpler architecture is the stronger choice, and how to make the assessment in structured enterprise terms. The next question is where sustainability fits inside architecture work without becoming a vague moral slogan. That is Module 40.
