
ISO 20022: A Blockchain Odyssey
Blockchain Architecture in Safety-Critical Engineering
Research Paper • February 2026
Abstract
ISO 20022 has become one of the most discussed topics in cryptocurrency. As the global financial messaging standard rolls out across SWIFT, the Eurozone, and major central banks, ISO 20022-related blockchains have attracted significant attention from investors who see compatibility as a signal that a blockchain is ready for institutional adoption.
This research explores ISO 20022 in the context of industrial standards.
Does a blockchain’s architecture align with the engineering principles that actually govern safety-critical financial systems?
Safety-critical disciplines across aviation, nuclear energy, automotive, medical devices, railway signaling, industrial control, cybersecurity, and financial markets have refined their standards over decades of experience and scientific research. The resulting wisdom is formalized in binding international standards that determine whether a system is qualified to control processes where failure carries serious consequences.
This paper examines those principles, maps them to blockchain architectural choices, and demonstrates that Zenon’s Network of Momentum, through its verification-first design, achieves a degree of structural alignment with safety-critical engineering that merits serious attention as blockchain use cases expand into higher-stakes domains.
The Core Distinction: Protocol Layer vs. Application Layer
There is a fundamental difference between a system that is safe and a system where you can build something safe on top of it.
When a safety property exists only at the application layer, every developer must independently get it right. Every smart contract, every access control mechanism, every conditional transaction is a bespoke implementation that must be individually audited, individually tested, and individually trusted. If one developer makes a mistake, that instance is unsafe even though the one next to it might be fine. The underlying protocol does not care. It executes whatever code it is given.
When a safety property exists at the protocol layer, it is inherited. Every participant gets it automatically because the network enforces it. You cannot opt out of it and you cannot implement it incorrectly, because you are not implementing it at all. It is how the system works.
In safety-critical design, this is the difference between a backup generator installation that relies on the homeowner to execute a sequence of steps in the proper order, versus an electromechanical transfer switch that can only route electricity in one of two safe configurations. The difference is measurable in the deaths of electrical linemen.
As it pertains to blockchain design, the distinction is not figurative. It is the basis of how regulators across all industries think about systemic risk, and it runs through every safety-critical standard examined in this paper.
Financial Standards: Where the Distinction Is Already Law
The protocol-vs-application distinction is not an abstraction in financial regulation. It is codified and enforced.
SEC Rule 15c3-5 (U.S. securities markets)
The SEC’s Market Access Rule exists because unverified orders entering the financial system pose systemic risk. A single erroneous algorithmic trade can cascade across interconnected markets in milliseconds. The rule requires that all orders pass through automated pre-trade risk controls before entering an exchange, verifying that they fall within declared credit and capital thresholds. If verification fails, the order is refused. The rule explicitly prohibits “naked” or “unfiltered” market access, where orders reach the exchange without passing through a verification layer, because the systemic consequences of unverified execution are too severe to tolerate.
The rule does not say “brokers may optionally build pre-trade risk checks.” It says the infrastructure itself must refuse unfiltered access. The safety property is mandatory and structural, not optional and application-level.
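As a sketch of the rule's structure (not an actual broker implementation; the class, field names, and thresholds here are illustrative assumptions), a pre-trade gateway in the spirit of Rule 15c3-5 checks every order against declared limits and refuses, rather than forwards, anything that fails:

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    price: float

class PreTradeGateway:
    """Illustrative Rule 15c3-5-style control: every order passes an
    automated check against declared thresholds before reaching the
    exchange; failing orders are refused, never forwarded."""

    def __init__(self, credit_limit: float, max_order_size: int):
        self.credit_limit = credit_limit      # declared capital threshold
        self.max_order_size = max_order_size  # declared per-order size bound
        self.exposure = 0.0                   # running open exposure

    def submit(self, order: Order) -> bool:
        notional = order.quantity * order.price
        # Refuse: order exceeds the declared size bound
        if order.quantity > self.max_order_size:
            return False
        # Refuse: cumulative exposure would breach the credit threshold
        if self.exposure + notional > self.credit_limit:
            return False
        # Only verified orders reach the exchange
        self.exposure += notional
        return True
```

The structural point is that `submit` is the only path to the exchange; there is no "unfiltered" entry that bypasses the check.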
A verification-first blockchain protocol is the self-enforcing, autonomous equivalent: the network itself refuses unverified state transitions before they commit, not because a regulator requires it, but because the architecture makes any other outcome impossible. No developer has to remember to add the safety check. No auditor has to verify that the check was implemented correctly. The protocol is the standard.
An execution-first blockchain protocol, by contrast, is architecturally “naked” in exactly the sense the SEC was targeting. Applications can add verification logic, but the base layer will execute whatever it receives.
CPMI-IOSCO Principles for Financial Market Infrastructures (global payments and settlement)
The CPMI-IOSCO PFMI is the international standard for systemically important payment systems, central counterparties, and securities settlement systems. Where two linked obligations are exchanged in a transaction, it requires that the settlement of one be conditional on the settlement of the other.
Financial market infrastructure must identify and manage operational risks across the system and its participants, and settlement finality must be a legally defined, verifiable moment. These principles govern the actual plumbing of the global financial system.
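The conditional-settlement principle (delivery versus payment) can be sketched as an atomic two-leg transfer. This is a toy model under assumed names, not any infrastructure's actual API: either both legs commit, or the ledger rolls back to its prior state.

```python
class Ledger:
    """Toy balance ledger keyed by (asset, holder)."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, asset, sender, receiver, amount):
        key = (asset, sender)
        if self.balances.get(key, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[key] -= amount
        self.balances[(asset, receiver)] = self.balances.get((asset, receiver), 0) + amount

def settle_dvp(ledger, cash_leg, securities_leg):
    """Delivery versus payment: the settlement of one leg is conditional
    on the other. A snapshot taken up front means a failure in either
    leg rolls both back -- neither obligation ever settles alone."""
    snapshot = dict(ledger.balances)
    try:
        ledger.transfer(*cash_leg)
        ledger.transfer(*securities_leg)
        return True
    except ValueError:
        ledger.balances = snapshot
        return False
```

The snapshot-and-rollback pattern is the smallest way to express the PFMI requirement that linked obligations settle atomically.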
ISO 20022 (financial messaging)
ISO 20022 defines XML-based message formats for payments, securities, and trade finance. It governs data interchange: correct fields, proper structure, valid data types. It does not address the safety, correctness, or reliability of the systems processing those messages. ISO 20022 is a messaging format. SEC Rule 15c3-5 and CPMI-IOSCO PFMI are safety standards. The distinction is substantial.
Common Principles Across Safety-Critical Standards
The same philosophy encoded in financial market regulation appears across every safety-critical domain. Despite significant differences in risk profile and regulatory context, these standards converge on a consistent set of engineering principles:
1. Verified Correctness Before Operational Commitment
Safety-critical systems require that verification evidence be complete before commissioning and that ongoing operation maintains verifiable correctness. Unverified states must not propagate through the system.
2. Explicit Resource Bounds
Every safety function declares its required resources (time, memory, computational budget) and operates strictly within those bounds. There are no “best effort” guarantees in domains where failure carries serious consequences.
3. Designed Failure Modes
When a system cannot verify correct operation, it transitions to a known safe state. Systems are designed to refuse action rather than act on unverified information.
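A minimal sketch of this fail-safe pattern, with hypothetical class and state names chosen for illustration: when an input cannot be verified as plausible, the controller latches into its known safe state instead of acting on it.

```python
class FailSafeController:
    """Illustrative fail-safe design: unverifiable input drives the
    system to a known safe state, which latches until an explicit
    reset -- the system refuses action rather than acting on
    unverified information."""
    SAFE_STATE = "safe_shutdown"

    def __init__(self, valid_range=(0.0, 100.0)):
        self.valid_range = valid_range
        self.state = "operating"

    def on_reading(self, value):
        if self.state == self.SAFE_STATE:
            return self.state  # latched: recovery requires explicit reset
        lo, hi = self.valid_range
        # Missing or implausible reading => cannot verify => go safe
        if value is None or not (lo <= value <= hi):
            self.state = self.SAFE_STATE
        return self.state
```

The latch matters: a fail-safe system does not quietly resume normal operation just because the next reading looks plausible.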
4. Safety Properties Independent of Liveness
Formalized by Alpern and Schneider’s 1985 Decomposition Theorem, safety (“nothing bad ever happens”) and liveness (“something good eventually happens”) are mathematically orthogonal properties. A well-designed system preserves safety even when liveness degrades.
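In Alpern and Schneider's trace-based formulation, a property is a set of infinite execution traces, and the two notions can be stated precisely (notation here follows the standard presentation, with $\sigma[..i]$ the length-$i$ prefix of $\sigma$ and juxtaposition denoting concatenation):

```latex
% Safety: every violation is irremediable -- some finite prefix
% already rules the trace out of P, whatever happens afterward.
\mathrm{Safety}(P) \;\iff\; \forall \sigma \notin P \;\; \exists i \;\; \forall \tau :\; \sigma[..i]\,\tau \notin P

% Liveness: no finite prefix is ever doomed -- every finite
% execution can still be extended into P.
\mathrm{Liveness}(P) \;\iff\; \forall \text{ finite } \alpha \;\; \exists \tau :\; \alpha\,\tau \in P

% Decomposition theorem: every property is the intersection of
% a safety property and a liveness property.
P = S \cap L, \quad \mathrm{Safety}(S), \;\; \mathrm{Liveness}(L)
```

The orthogonality claimed in the text follows directly: degrading liveness (delaying the "something good") cannot, by itself, violate safety, because a safety violation requires a finite prefix that is already bad.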
The Standards
Every one of the following standards, across vastly different industries, encodes the same core requirement: systems must be verifiably correct before they are trusted to operate, they must declare the bounds of their operation explicitly, and when they cannot verify, they must fail safely rather than proceed on assumption. The higher the stakes, the more rigorous the verification demanded, but the underlying philosophy is identical whether the system is flying a plane, pacing a heart, cooling a reactor, or settling a financial transaction.
IEC 61508 (all industries): The parent functional safety standard. “Any safety-related system must work correctly or fail in a predictable (safe) way.” Defines Safety Integrity Levels (SIL 1–4) with progressively more rigorous verification requirements.
IEC 62304 (medical devices): Assumes software failure probability is 100%. The system must be designed to remain safe even when software fails.
NIST SP 800-207 (cybersecurity): The most explicit formulation of the verification-first principle among these standards: “Never trust, always verify.”
ISO/IEC 15408 (security evaluation): The Common Criteria. At the highest level (EAL 7), mathematical proof of correctness is required.
Verification-First vs. Execution-First Architecture
All blockchain systems perform some verification before execution, including signature checks, nonce validation, balance verification, and format validation. The architectural distinction lies in whether a state transition's correctness can be established from proofs before it is committed, or only reconstructed by re-execution afterward.
Execution-First Architecture
In execution-first designs, semantic verification requires re-execution. To confirm a state transition is correct, a node replays the computation that produced it. This creates several architectural characteristics:
Verification scales with execution complexity. As applications grow more complex, verification costs grow proportionally, creating pressure to delegate verification to specialized nodes.
Resource-constrained participants face a trust decision. A browser or mobile device that cannot re-execute a complex computation must either trust a full node’s attestation or forgo verification entirely.
Safety properties are coupled to liveness. Correctness guarantees require the network to be live and functioning. During partitions or consensus disruptions, pending state may be undefined.
Zenon’s Verification-First Architecture
Zenon’s Network of Momentum inverts the priority. Execution is constrained to produce verifiable outputs. Verification operates on cryptographic proofs rather than computation replay.
Dual-Ledger Separation. The architecture separates execution from commitment ordering:
Account-chain layer (execution): Each account maintains its own append-only ledger of state transitions in a block lattice, enabling parallel execution and data availability.
Momentum chain layer (commitment ordering): A global sequential ledger records cryptographic digests of account-chain state transitions, providing temporal ordering and global anchoring.
A verifier proves correctness by checking a cryptographic proof rather than re-executing the computation. This structurally separates the timeliness of ordering from the thoroughness of verification while also avoiding contextual interpretation and interference at the ordering layer. A transaction is ordered before its semantic context is considered.
The analogy is a registered letter: it is postmarked and recorded before it is delivered. Delivery is a separate, subsequent service.
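The dual-ledger shape can be sketched with plain hash chains (class names, the address format, and the digest scheme are illustrative assumptions, not Zenon's actual data structures): per-account chains record execution, and a global momentum chain commits only their digests, ordering transitions without interpreting them.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class AccountChain:
    """Per-account append-only ledger (the execution layer)."""
    def __init__(self, address: str):
        self.address = address
        self.blocks = []

    def append(self, payload: bytes) -> str:
        prev = self.blocks[-1] if self.blocks else digest(self.address.encode())
        block = digest(prev.encode() + payload)  # link each block to its predecessor
        self.blocks.append(block)
        return block

class MomentumChain:
    """Global sequential ledger recording only digests of account-chain
    transitions: ordering is committed before semantic context is read."""
    def __init__(self):
        self.momentums = []

    def commit(self, account_digests) -> str:
        prev = self.momentums[-1] if self.momentums else digest(b"genesis")
        body = "".join(sorted(account_digests)).encode()
        momentum = digest(prev.encode() + body)
        self.momentums.append(momentum)
        return momentum
```

Note that `MomentumChain.commit` never sees payloads, only digests: the ordering layer is structurally incapable of contextual interference.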
Bounded Verification. Every verification operation declares its storage, bandwidth, and computation bounds upfront. Verifiers operate within declared parameters, mirroring IEC 61508’s requirement that safety functions specify and respect explicit resource constraints.
A verifier determines what resources it has and how much information it needs to make a responsible commitment. If it needs more information, it can correctly express that within the system.
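One way to picture a declared resource budget (the field names and units are illustrative assumptions): a verifier publishes its bounds up front and checks a verification task against them before attempting it, so out-of-bounds work is never started on a best-effort basis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VerificationBounds:
    """Resources a verifier declares up front, in the spirit of
    IEC 61508's explicit resource constraints (illustrative fields)."""
    max_proof_bytes: int
    max_hash_ops: int
    max_seconds: float

    def admits(self, proof_size: int, est_hash_ops: int, est_seconds: float) -> bool:
        # A task is attempted only if every dimension fits the declared budget
        return (proof_size <= self.max_proof_bytes
                and est_hash_ops <= self.max_hash_ops
                and est_seconds <= self.max_seconds)
```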
Genesis Anchoring. Trust roots embedded at genesis allow any verifier, even after extended offline periods, to resynchronize by following cryptographic commitment chains back to the origin of the network.
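Resynchronization from a genesis root amounts to replaying a hash-linked commitment chain (a simplified sketch with assumed helper names, not Zenon's wire format): any broken link between the embedded genesis trust root and the present is detected, no matter how long the verifier was offline.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_chain(genesis: str, payloads):
    """Build a commitment chain where each link hashes its predecessor."""
    chain, prev = [], genesis
    for p in payloads:
        prev = h(prev.encode() + p)
        chain.append(prev)
    return chain

def anchored_to_genesis(genesis: str, payloads, chain) -> bool:
    """A verifier returning from an extended offline period replays the
    hash links from the genesis root; any tampered link breaks the walk."""
    prev = genesis
    for p, link in zip(payloads, chain):
        prev = h(prev.encode() + p)
        if prev != link:
            return False
    return len(chain) == len(payloads)
```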
Three-Outcome Verification (ACCEPT / REJECT / REFUSE). The architecture formalizes a verification model drawing on established concepts from three-valued logic (Łukasiewicz, 1920), hardware verification, and fail-silent failure modes in distributed systems.
If the only dumb question is the question you don’t ask, then what does that say about a system incapable of asking questions?
REFUSE is a correct and intelligent response that translates to "I don't know at this time." It is not dumb failure or empty refusal; it is a signal to the network that there was an information deficit. Put simply: when a device cannot verify, Zenon lets it ask a question.
Truth-seeking begins with questions. Where questions are being asked, an economy of proof seeking and serving will assemble.
How it looks as a protocol:
When a verifier cannot cryptographically prove correctness within its declared bounds:
- It emits a refusal code and records a refusal witness
- The verifier remains a correct, functioning participant
- User interfaces surface “verification refused” as distinct from “failed”
This is fail-safe design applied to distributed verification. The system degrades gracefully, maintains safety, and provides explicit information about what cannot be verified, rather than requiring participants to choose between blind trust and disconnection.
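The three-outcome model above can be sketched as a small verdict type (the enum, `Verdict` structure, and witness strings are illustrative assumptions, not the protocol's actual encoding): REFUSE is a first-class result carrying a witness for what was missing, distinct from both ACCEPT and REJECT.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Outcome(Enum):
    ACCEPT = "accept"   # proof verified within declared bounds
    REJECT = "reject"   # proof verified and found invalid
    REFUSE = "refuse"   # cannot decide within declared bounds

@dataclass
class Verdict:
    outcome: Outcome
    witness: Optional[str] = None  # refusal witness: what was missing

def verify(proof, have_data: bool, within_bounds: bool, check) -> Verdict:
    """Three-outcome verification: a verifier that lacks data or budget
    emits REFUSE with a witness, and remains a correct participant."""
    if not have_data:
        return Verdict(Outcome.REFUSE, witness="missing ancestor proof")
    if not within_bounds:
        return Verdict(Outcome.REFUSE, witness="declared bounds exceeded")
    return Verdict(Outcome.ACCEPT if check(proof) else Outcome.REJECT)
```

A user interface consuming `Verdict` can then surface "verification refused" as its own state rather than collapsing it into failure.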
Safety Independent of Liveness. The architecture ensures that correctness of verified facts is preserved even when the network is degraded, partitioned, or under attack. Liveness (eventual verification) may degrade under adversarial conditions, but safety (correctness of what has been verified) does not. This separation is formally grounded in the Alpern-Schneider Decomposition Theorem.
The matrix below maps safety-critical principles to architectural properties. It represents structural alignment, not certification status.

| Safety-Critical Principle | Representative Standards | Zenon Architectural Property |
|---|---|---|
| Verified correctness before operational commitment | IEC 61508, NIST SP 800-207 | Verification-first ordering; proofs checked before state commits |
| Explicit resource bounds | IEC 61508 | Bounded verification with declared storage, bandwidth, and computation budgets |
| Designed failure modes | IEC 62304, EN 50129 | Three-outcome verification; REFUSE as a safe, explicit state |
| Safety independent of liveness | Alpern-Schneider (1985) | Verified facts remain correct under partition or degraded liveness |
Scope and Limitations
This analysis does not claim that Zenon is certified under any safety-critical standard. Formal certification requires domain-specific assessment processes that no blockchain has undergone.
This analysis does not claim that execution-first architectures are unsuitable for their current applications.
This analysis does claim that Zenon’s verification-first architecture — with its dual-ledger separation, bounded verification, genesis anchoring, REFUSE semantics, and safety-liveness independence — is structurally aligned with the universal engineering principles that safety-critical standards encode. As blockchain infrastructure expands into higher-stakes domains, this alignment represents a meaningful differentiator.
References
Standards
| Standard | Domain | Governing Body |
|---|---|---|
| IEC 61508 | Functional Safety (all industries) | IEC |
| DO-178C / ED-12C | Aviation Software | RTCA / EUROCAE |
| ISO 26262 | Automotive Functional Safety | ISO |
| IEC 62304 | Medical Device Software | IEC |
| IEC 61513 / 60880 / 62340 | Nuclear I&C Safety | IEC |
| EN 50128 / 50129 / 50716 | Railway Signaling | CENELEC |
| NIST SP 800-207 | Zero Trust Architecture | NIST |
| ISO/IEC 15408 | Common Criteria | ISO/IEC |
| IEC 62443 | Industrial Cybersecurity | IEC |
| SEC Rule 15c3-5 | Market Access Risk Controls | SEC |
| CPMI-IOSCO PFMI | Financial Market Infrastructures | BIS / IOSCO |
| ISO 20022 | Financial Messaging | ISO |
Theoretical Foundations
Alpern, B. & Schneider, F. (1985). Defining Liveness. Information Processing Letters, 21:181–185.
Fischer, M., Lynch, N., & Paterson, M. (1985). Impossibility of Distributed Consensus with One Faulty Process. JACM, 32(2):374–382.
Lamport, L. (1977). Proving the Correctness of Multiprocess Programs. IEEE Transactions on Software Engineering.
This analysis draws on Zenon Network’s community documentation and publicly available safety-critical engineering standards. It demonstrates structural architectural alignment and does not claim formal certification under any standard.