Business intelligence, analytics, and decision support
This is the seventh of 10 Information Systems Architecture modules. It examines analytics architecture as a decision-support discipline rather than a dashboard design exercise. The module is anchored by G238, the TOGAF Series Guide on Business Intelligence and Analytics.
By the end of this module you will be able to:
- Explain why analytics architecture is broader than reports and dashboards
- Describe the four-layer decision-support chain from source information through semantics to consumer use
- Identify the trust questions that must be answered before analytics can support serious decisions
- Describe the G238 analytics architecture patterns and explain when each one applies
- Explain why the semantic layer is the most common point of failure in analytics architecture
- Apply decision-support thinking to London planning, publication, and governance scenarios

Real-world case · 2024
Fourteen dashboards. Zero confirmed semantic consistency.
A distribution network operator launched a transformation dashboard programme in early 2024. Within three months, the programme had produced fourteen dashboards covering planning, asset health, connections performance, and regulatory compliance. The executive team described the outputs as visually impressive.
The head of strategy asked a harder question: "Can we explain where the numbers come from, whether the definitions are consistent across dashboards, and what decisions each dashboard is supposed to support?"
The team could answer some of those questions for some dashboards. It could not answer all of them for any dashboard. The headline connections-performance metric used a different denominator from the one reported to the regulator. The asset-health score blended two data sources without documenting which one took precedence when they disagreed. The dashboards looked like decision-support tools. Without the architecture underneath them, they were attractive illustrations built on unresolved trust questions.
If an executive team describes a set of dashboards as visually impressive but cannot confirm that the definitions are consistent across them, are the dashboards decision-support tools or attractive illustrations?
That story illustrates why dashboards are not the architecture. Decision support depends on information quality, semantic consistency, refresh logic, ownership, and the consumer's ability to interpret the result correctly.
32.1 Why dashboards are not the architecture
Dashboards are visible, which makes them easy to mistake for the whole analytics story. G238's value is to pull the learner back to the supporting structure that sits underneath the visible layer.
A dashboard is the presentation layer of a much deeper architecture. If the layers underneath are weak, the dashboard will be attractive but misleading. G238 describes analytics architecture as the complete chain from authoritative sources through semantic meaning and calculation logic to the analytical products that consumers use for decisions.
“Analytics architecture defines the enterprise discipline of designing the chain from authoritative source information through semantic meaning, calculation logic, and refresh rules to the analytical products that consumers use for operational, planning, governance, or publication decisions.”
TOGAF Series Guide G238, Business Intelligence and Analytics
Without that chain, a dashboard can be attractive and still be misleading. The architecture's job is to make the chain visible and governable.
32.2 The four-layer decision-support chain
G238 structures analytics architecture around four connected layers. Each layer serves a different purpose and has different governance needs.
Layer 1: Source information. The data and events that feed the analytical view, with their authority and quality constraints. If the sources are not governed, nothing downstream can be fully trusted. The authority assignments from Module 28 determine which sources are legitimate for each analytical purpose.
Layer 2: Semantic layer. The shared meaning, calculation logic, and contextual rules that make the analytical view interpretable. This is where definitions, units, filters, and aggregation rules are settled. It is also where most invisible inconsistencies hide. If two dashboards define "connections performance" using different denominators, the semantic layer is ungoverned.
Layer 3: Analytical product. The report, dataset, dashboard, or model output the consumer actually sees and uses. This is the visible layer that gets the most attention, but it is only as strong as the layers underneath it. The metadata from Module 31 should travel with each analytical product.
Layer 4: Decision use. The operational, planning, governance, or publication decision that depends on the analytical result. This is the layer that gives the whole chain its purpose. An analytical product without a named decision use is an output without a customer.
The chain operates top-down for design (start with the decision, work back to the sources) and bottom-up for validation (trace from sources through semantics to the decision to confirm the chain is trustworthy). G238 recommends both directions.
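The bottom-up validation pass can be sketched as a small routine that walks the four layers and lists the trust gaps it finds. This is a minimal sketch, not anything defined in G238: the class names, gap messages, and the "CRM extract" example are all illustrative.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Source:                    # Layer 1: source information
    name: str
    authoritative: bool

@dataclass
class Metric:                    # Layer 2: semantic layer entry
    name: str
    definition: str              # shared calculation rule
    owner: str

@dataclass
class Product:                   # Layer 3: analytical product
    name: str
    sources: List[Source]
    metric: Optional[Metric]
    decision_use: Optional[str]  # Layer 4: the decision the product supports

def validate_chain(product: Product) -> List[str]:
    """Bottom-up validation: list the trust gaps found in the chain."""
    gaps = []
    if not product.sources:
        gaps.append("no sources declared")
    elif any(not s.authoritative for s in product.sources):
        gaps.append("non-authoritative source feeding the view")
    if product.metric is None:
        gaps.append("no governed semantic definition")
    if product.decision_use is None:
        gaps.append("no named decision use")
    return gaps

# A dashboard fed by an ungoverned extract, with no settled definition
# and no named decision, fails validation on three counts.
crm = Source("CRM extract", authoritative=False)
dash = Product("connections dashboard", sources=[crm],
               metric=None, decision_use=None)
print(validate_chain(dash))
```

The design direction runs the other way: start by naming the decision use, then work back through the product and semantic definition to the sources that must be authoritative for it.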
32.3 Why the semantic layer is the most common failure point
The semantic layer is where most analytics architecture fails because it is the least visible and the most easily skipped. Source data is tangible. Dashboards are visible. But the shared meaning that connects them is abstract, and teams under delivery pressure often assume it rather than design it.
Three failure patterns are common. First, different dashboards use different definitions for the same business concept, creating conflicting views that confuse governance. Second, calculation logic is embedded in individual reports rather than maintained in a shared layer, making it impossible to confirm consistency. Third, the semantic layer exists on paper but is not enforced, so individual teams override it with local definitions that gradually diverge.
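The second failure pattern is easy to see in miniature. The sketch below uses hypothetical figures and a hypothetical `connections_performance` definition; the point is structural, not numerical: embedded logic lets two products diverge silently, while a shared definition makes consistency automatic.

```python
# Hypothetical figures: 95 connections completed, 100 requested, 5 withdrawn.
completed, requested, withdrawn = 95, 100, 5

# Failure pattern: calculation logic embedded in each report.
# Dashboard A excludes withdrawn requests from the denominator;
# dashboard B does not. Same concept, two conflicting numbers.
dashboard_a = completed / (requested - withdrawn)   # 1.0
dashboard_b = completed / requested                 # 0.95

# Governed alternative: one definition maintained in a shared semantic
# layer and consumed by every analytical product.
def connections_performance(completed: int, requested: int,
                            withdrawn: int) -> float:
    """Shared semantic-layer definition: withdrawn requests are excluded."""
    return completed / (requested - withdrawn)

dashboard_a = connections_performance(completed, requested, withdrawn)
regulatory_submission = connections_performance(completed, requested, withdrawn)
assert dashboard_a == regulatory_submission  # consistent by construction
```

This is exactly the denominator mismatch from the opening case: the transformation dashboard and the regulatory submission each embedded their own logic instead of consuming one governed definition.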
The architecture team's role is to establish the semantic layer as a governed asset and ensure it is maintained alongside the analytical products it supports. G238 positions semantic governance as the single most important analytics architecture discipline.
32.4 Analytics architecture patterns from G238
G238 describes several analytics architecture patterns that help the team choose the right structural approach for different decision-support needs.
Centralised analytics pattern. All analytical products are produced from a single governed data warehouse or lake. This pattern works well when the enterprise needs tight semantic control and the analytical workload is predictable. London regulatory reporting might suit this pattern.
Federated analytics pattern. Different teams manage their own analytical products but share a common semantic layer. This pattern suits enterprises where different business units have different analytical needs but must report using consistent definitions. London planning and operations might each maintain their own dashboards while sharing a common definition of network headroom.
Self-service analytics pattern. Individual users create their own analytical views from governed source data. This pattern suits exploratory analysis but carries higher semantic risk because individual users may interpret shared data differently. The architecture should define which source datasets are available for self-service use and what semantic guardrails apply.
Embedded analytics pattern. Analytical capabilities are built into operational applications rather than existing as separate reporting tools. This pattern suits operational decisions that need real-time or near-real-time insight. London fault-detection and condition-monitoring dashboards might follow this pattern.
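One way the Phase C pack might record the pattern choices is as a simple mapping from decision need to pattern and rationale. The entries below are a hypothetical sketch for the London case, not a prescription from G238.

```python
# Hypothetical mapping of London analytical needs to G238 patterns,
# with a one-line rationale for each choice.
PATTERN_CHOICES = {
    "regulatory reporting":   ("centralised",
                               "tight semantic control, predictable workload"),
    "cross-team dashboards":  ("federated",
                               "local products over a shared semantic layer"),
    "exploratory analysis":   ("self-service",
                               "governed sources with semantic guardrails"),
    "operational monitoring": ("embedded",
                               "near-real-time insight inside applications"),
}

pattern, rationale = PATTERN_CHOICES["regulatory reporting"]
print(pattern, "-", rationale)
```

Recording the rationale alongside the pattern matters: it is what lets governance later ask whether the conditions that justified the choice still hold.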
Common misconception
“Building dashboards quickly is the best way to demonstrate analytics value.”
When teams celebrate the dashboard before they have settled the semantic layer, the architecture is usually moving too fast for the trust level it needs. The visual layer is the easiest part. The hard part is what sits underneath it.
32.5 Questions that make analytics trustworthy
Four trust questions should be answerable for any analytical product the enterprise uses for serious decisions.
- Which authoritative sources feed the view and what assumptions sit behind them? If the data team cannot name the sources and their authority status, the analytical product is built on undisclosed assumptions.
- Are the measures and categories defined consistently across consumers? A metric that means one thing on one dashboard and something different in a regulatory submission is a governance liability.
- How fresh does the information need to be for the decision it supports? A planning metric refreshed quarterly may be fine for strategic review. The same metric used for weekly operational decisions may be dangerously stale.
- Who owns the metric, the semantic logic, and the consumer guidance? Ownership determines who resolves inconsistencies and who updates the definition when business conditions change.
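The freshness question lends itself to a mechanical check: a product's refresh interval should not exceed the cadence of the decision it supports. A sketch under assumed cadences (the product names and intervals are hypothetical):

```python
from datetime import timedelta

# Hypothetical cadences. The rule: refresh at least as often as the
# decision the product supports is taken.
REFRESH_INTERVAL = {"planning dashboard": timedelta(days=365)}
DECISION_CADENCE = {"quarterly investment review": timedelta(days=91)}

def fresh_enough(product: str, decision: str) -> bool:
    """True if the product refreshes often enough for the decision."""
    return REFRESH_INTERVAL[product] <= DECISION_CADENCE[decision]

# An annually refreshed dashboard cannot support a quarterly decision.
print(fresh_enough("planning dashboard", "quarterly investment review"))
```

The same comparison answers the scenario later in this module of a board asking whether an annually refreshed dashboard can support quarterly investment decisions.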
London Grid Distribution
The London case uses analytics architecture for planning visibility, operational insight, publication outputs, and board-level transformation oversight. Those are different decision uses with different timeliness, quality, and audience needs.
- A London metric is only useful if its authority, meaning, and decision purpose are all clear.
- Good analytics architecture protects both speed and trust instead of sacrificing one to obtain the other.
- The London planning dashboard, the regulatory submission, and the board transformation report all need consistent semantics from a shared semantic layer.
- Different London analytical needs may suit different G238 patterns: centralised for regulatory reporting, federated for cross-team dashboards, embedded for operational monitoring.
- The Phase C pack should document which pattern applies where and why.
A transformation programme produces fourteen dashboards in three months. The executive team describes them as visually impressive. The head of strategy asks whether the definitions are consistent. The team cannot confirm. What layer of the analytics architecture is most likely weak?
A board member notices that the connections-performance metric on the transformation dashboard differs from the figure in the regulatory submission. Both outputs are produced by the same data team. What is the most likely root cause?
A programme board asks whether a planning dashboard can support quarterly investment decisions. The dashboard refreshes annually. What trust question has been missed?
Key takeaways
- Analytics architecture includes sources, meaning, ownership, freshness, and decision use. A dashboard is only one visible part.
- The four-layer decision-support chain (source, semantic, product, decision use) is the framework G238 uses to structure analytics architecture.
- The semantic layer is the most common point of failure because it is the least visible and most easily skipped.
- G238 describes four analytics patterns: centralised, federated, self-service, and embedded. Different London needs suit different patterns.
- Trustworthy analytics depends on semantic and authority discipline, not just visual design.
- Decision context matters as much as visual design. An analytical product without a named decision use is an output without a customer.
Standards and sources cited in this module
G238, Business Intelligence and Analytics
Primary guide for analytics architecture patterns and the decision-support chain within the TOGAF ecosystem.
Provides the information-domain context that analytics decisions depend on.
Connects metadata discipline to the semantic layer that analytics trustworthiness requires.
You now understand analytics as a decision-support chain with four layers and multiple patterns. The next question is: how does application architecture define responsibilities before products, using ABBs and SBBs? That is Module 33.
Module 32 of 64 · Information Systems Architecture
