
Spam, Storage, and the 16 KB Data Field
A framework for reasoning about resource abuse in Zenon’s dual-ledger architecture
Stark · April 2026
Status: Research framing — not a specification. Intended for the tminusz developer commons.
TL;DR: 16 KB isn’t excessive — it’s a reasonable ceiling for a chain designed for the post-quantum transition.
1. The Spam Problem
Every public ledger has to deal with spam. Most chains handle it with fees — you pay per byte, the money goes to validators, and the cost of filling the chain with garbage scales linearly with how much garbage you want to put there. The fee is irrecoverable, so the attacker loses real money on every transaction.
Zenon doesn’t work this way. There are no fees. Every transition requires plasma, and plasma can be generated two ways: compute a proof-of-work, or fuse QSR tokens for sustained capacity. The PoW path means any device can transact with zero token balance — a browser, a sensor, an AI agent, a first-time user with nothing in their wallet. No gas token acquisition, no permission.
That access model raises the spam question directly. If anyone can write to the chain by burning CPU cycles, and sustained users can lock QSR that gets returned to them afterward, and account-blocks can carry up to 16 KB in the data field — what stops an attacker from flooding the ledger?
2. Admission Control
The two plasma generation paths operate at different timescales:
PoW plasma (per-transaction). A computational proof that generates plasma for a single transition. Small enough for phones, large enough to throttle automated spam, adjustable over time. This is what makes zero-balance access possible: you pay with CPU-seconds instead of capital. It bounds the rate at which a zero-balance source can submit transitions, because on this path throughput is limited by compute rather than by how much QSR anyone controls.
Fused QSR plasma (per-account). Locking QSR to the protocol generates ongoing plasma capacity. Bigger data payloads cost more plasma, which means more locked QSR to sustain any given throughput. And QSR is a scarce resource: every unit fused for spam is unavailable for anything else. So plasma meters throughput through proof-of-stake, and the question "can someone spam the network?" becomes "can someone acquire and lock up enough of a finite token to monopolize throughput?" That's the same capital-control question every PoS system has to answer. It just looks unfamiliar because PoS is usually applied to consensus (who produces blocks) rather than throughput (who consumes resources). The analysis is the same either way: capital requirements scale with the ambition of the attack. A minimal sketch of how the two paths might combine into one admission check follows this list.
Adaptive difficulty (open question). One idea that has been discussed but never specified: a global tuning layer that adjusts plasma difficulty in response to observed state growth [1]. If aggregate load exceeds a target, difficulty rises, forcing progressively more capital lockup or computation. This would affect how expensive it is to create transitions without changing how light clients verify the chain. Whether this takes the form of an automatic algorithm, a governance parameter, or something else entirely is an open design question.
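To make the interaction between the two paths concrete, here is a minimal Go sketch of an admission check under stated assumptions: the constants (basePlasma, plasmaPerByte, plasmaPerFusedQSR) and the additive combination of PoW plasma and fused plasma are illustrative choices, not protocol values.

```go
// Minimal sketch of an admission check combining the two plasma paths.
// All constants and names are illustrative, not protocol values.
package plasma

import "errors"

const (
	maxDataBytes      = 16 * 1024 // data field ceiling
	basePlasma        = 21000     // hypothetical base cost per account-block
	plasmaPerByte     = 68        // hypothetical marginal cost per data byte
	plasmaPerFusedQSR = 2100      // hypothetical ongoing capacity per fused QSR
)

// required returns the plasma a transition carrying dataLen bytes must supply.
// Larger payloads cost more, so sustaining large-payload throughput demands
// either more PoW per block or more QSR locked to the account.
func required(dataLen int) (uint64, error) {
	if dataLen > maxDataBytes {
		return 0, errors.New("data field exceeds 16 KB ceiling")
	}
	return basePlasma + uint64(dataLen)*plasmaPerByte, nil
}

// admit decides whether a block clears the plasma bar. powPlasma is what the
// submitted proof-of-work is worth; fusedQSR is the account's locked balance.
// Treating the two sources as additive is an assumption of this sketch.
func admit(dataLen int, powPlasma, fusedQSR uint64) (bool, error) {
	need, err := required(dataLen)
	if err != nil {
		return false, err
	}
	available := powPlasma + fusedQSR*plasmaPerFusedQSR
	return available >= need, nil
}
```

The point of the sketch is the scaling, not the numbers: the required plasma grows with payload size, and the only ways to meet it are compute per block or capital locked to the account.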
3. Why 16 KB
The data field ceiling is calibrated against the proof systems that exist today and the cryptographic transition ahead [2]:
| System | Size | Why it matters |
|---|---|---|
| Groth16 / PLONK | 128–500 B | What every L2 publishes today. |
| Falcon-512 | ~666 B | Smallest practical PQ signature (NIST Level 1). |
| Dilithium 2–5 | 2.4–4.6 KB | Lattice-based PQ signatures (NIST Levels 2–5). |
| SPHINCS+-128s | ~7.9 KB | Hash-only PQ signature. Most conservative assumption. |
| Halo 2 | 5–10 KB | Recursive ZK, no trusted setup. Zcash Orchard. |
| STARKs (raw) | 40–200 KB | Always compressed before on-chain. Never published raw. |
Every row except raw STARKs fits under the 16 KB ceiling. What doesn't fit is either always recursively compressed before going on-chain or impractical for lightweight devices.
Today this is a consensus rule. It could become a governance-adjustable parameter in the future, but the reasoning for the current value still matters: too low and the post-quantum transition forces a disruptive change, too high and the spam surface grows unnecessarily. 16 KB gives about 2x headroom over the worst-case practical PQ signature, and you only pay for what you actually use.
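A quick way to sanity-check the calibration is to run the table's numbers against the ceiling. The byte counts below are approximate figures taken from the table and published parameter sets; this is a worked check, not consensus code.

```go
// Check which of the table's proof/signature sizes clear the 16 KB data field.
package main

import "fmt"

func main() {
	const ceiling = 16 * 1024 // data field ceiling in bytes

	systems := []struct {
		name string
		size int // approximate upper-end size in bytes
	}{
		{"Groth16 / PLONK", 500},
		{"Falcon-512", 666},
		{"Dilithium5", 4595},
		{"SPHINCS+-128s", 7856},
		{"Halo 2", 10 * 1024},
		{"STARK (raw)", 200 * 1024},
	}
	for _, s := range systems {
		fmt.Printf("%-16s %7d B  fits: %v\n", s.name, s.size, s.size <= ceiling)
	}
	fmt.Printf("headroom over SPHINCS+-128s: %.1fx\n", float64(ceiling)/7856)
}
```

The last line prints roughly 2.1x, which is where the "about 2x headroom" figure comes from.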
4. What the Chain Actually Stores
The spam concern assumes every byte in the data field gets stored permanently by every node. That doesn’t have to be how it works.
The Interstellar OS concept specification [3] sketches one approach: separate the Verification Kernel (trusted) from the Client Layer (untrusted), and make payload retrieval a Client Layer responsibility. Under this model, the kernel just checks that whatever bytes arrive match the recorded hash — it doesn’t care where they came from. The Commit Channels protocol [4] extends this idea with multiple payload schemes: inline (embedded in the transaction), content_addressed (off-chain at a pointer), and several encrypted and structured variants. For off-chain payloads, the on-chain footprint is the hash plus metadata, not the full payload. The protocol is upfront that it doesn’t solve payload availability — that’s the application designer’s problem [4].
These are concept specifications, not shipped implementations. But the architectural principle they illustrate is sound and not unique to Zenon: separate commitment from storage, and the chain’s per-node burden drops to the size of a hash regardless of how large the original payload was. If something like this gets built, 16 KB in the data field doesn’t mean 16 KB of permanent storage per block per node. The chain stores the commitment. The bytes live wherever they live, and the hash proves the content regardless of where you got it from.
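The principle reduces to very little code. The sketch below assumes a 32-byte content hash (SHA3-256 here, an assumption rather than the actual hash function) and a commitment struct whose layout is invented for illustration; the scheme names come from [4].

```go
// Sketch of commitment/storage separation: the chain stores only a hash plus
// minimal metadata, and any retrieved payload is accepted iff it matches.
// Hash function and struct layout are illustrative assumptions.
package payload

import (
	"bytes"
	"errors"

	"golang.org/x/crypto/sha3"
)

// Commitment is what the ledger actually stores for an off-chain payload.
type Commitment struct {
	Scheme string // e.g. "inline", "content_addressed" (names from [4])
	Hash   [32]byte
	Size   uint32
}

// Verify checks bytes fetched from any untrusted source against the on-chain
// commitment. Where the bytes came from is irrelevant; only the hash matters.
func Verify(c Commitment, fetched []byte) error {
	if uint32(len(fetched)) != c.Size {
		return errors.New("payload size mismatch")
	}
	sum := sha3.Sum256(fetched)
	if !bytes.Equal(sum[:], c.Hash[:]) {
		return errors.New("payload hash mismatch")
	}
	return nil
}
```

Nothing in Verify cares where fetched came from: a peer, a gateway, a local cache. That is the whole point of separating commitment from storage.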
5. Layered Defense
No single mechanism handles this alone. Taken together, the layers make the cost of sustained spam scale across independent dimensions:
Micro-PoW. CPU cost at submission. Prevents zero-cost bulk flooding.
Plasma from finite QSR. Capital lockup for sustained throughput. Same PoS economics as any stake-weighted system, applied to resource access.
Adaptive difficulty (if specified). Plasma requirements that adjust based on network load [1].
Commitment/storage separation. If the storage architecture separates commitments from payloads (as proposed in [3][4]), the chain stores a hash and the bytes live elsewhere.
6. Open Questions
If an adaptive difficulty layer gets built, it has to address these questions:
Adjustment granularity. Per-Momentum, daily, or rolling window? Fast adjustment risks oscillation; slow adjustment may not respond quickly enough. One possible smoothed, clamped update rule is sketched after this list.
Scope. Global (everyone pays the same increase), per-account (heavy users pay more), or per-transition-type (large payloads governed separately)? Per-type adjustment could protect proof publishing from being priced out by unrelated spam.
Interaction with proofs. If adaptive difficulty treats all data-field usage uniformly, spam-triggered difficulty increases also hit legitimate proof publication. Distinguishing structured proofs from arbitrary bytes is hard without adding semantic interpretation to consensus.
Signal gaming. Whatever signals drive the difficulty adjustment must not be manipulable by the attacker.
Worst-case honest-user cost. If difficulty increases price out legitimate users before they price out the attacker, the mechanism has failed. This is the hardest open problem.
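To make the granularity and oscillation trade-off concrete, here is one shape such a controller could take: a per-Momentum update of an exponentially smoothed load estimate, with a clamp on how far difficulty can move in a single step. Every knob in it (the target, the smoothing weight, the clamp) is a hypothetical placeholder, not part of any specification.

```go
// One possible shape for an adaptive difficulty controller: an exponentially
// weighted moving average of observed data bytes per Momentum, compared to a
// target, with a clamp on per-step movement to damp oscillation. All values
// here are hypothetical knobs, not specified protocol parameters.
package difficulty

const (
	targetBytesPerMomentum = 512 * 1024 // hypothetical sustainable growth rate
	alpha                  = 0.05       // EMA weight: smaller = slower, smoother
	maxStep                = 1.02       // max multiplicative change per update
)

// Controller tracks a smoothed load estimate and a plasma difficulty multiplier.
type Controller struct {
	load       float64 // smoothed bytes-per-Momentum estimate
	Multiplier float64 // current difficulty multiplier (1.0 = base schedule)
}

func NewController() *Controller {
	return &Controller{load: targetBytesPerMomentum, Multiplier: 1.0}
}

// Observe feeds in the data bytes committed in the latest Momentum. Difficulty
// keeps ratcheting up while the smoothed load stays above target and decays
// back toward the base schedule once it drops below.
func (c *Controller) Observe(bytesThisMomentum float64) {
	c.load = alpha*bytesThisMomentum + (1-alpha)*c.load

	step := c.load / targetBytesPerMomentum // >1 means over target
	if step > maxStep {
		step = maxStep
	} else if step < 1/maxStep {
		step = 1 / maxStep
	}
	c.Multiplier *= step
	if c.Multiplier < 1.0 {
		c.Multiplier = 1.0 // never cheaper than the base plasma schedule
	}
}
```

A smaller alpha answers the granularity question in favor of slow, stable adjustment; a tighter maxStep damps oscillation at the cost of responsiveness. Which trade-off is right, and whether the input signal itself can be gamed, is exactly what this section leaves open.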
7. Conclusion
Zenon’s spam question is the standard PoS capital-control question applied to throughput instead of consensus. The 16 KB data field is sized for the post-quantum transition. The storage architecture — if built as proposed — separates what the chain commits to from what it stores.
What remains open is whether an adaptive difficulty layer gets specified, and what form the commitment/storage separation takes in practice. This essay provides the framework for evaluating that work when it arrives.
References
[1] Dynamic Plasma remains an open research area within the Zenon ecosystem. No finalized specification exists. The conceptual model — adaptive plasma difficulty targeting sustainable state growth — is described in community research drafts. The open questions in Section 6 define the primary research surface.
[2] Proof size figures drawn from published parameters: Groth16 (Groth, 2016), PLONK (Gabizon, Williamson, Ciobotaru, 2019), Halo 2 (Electric Coin Company / Zcash), NIST Post-Quantum Cryptography standardization (FIPS 204: ML-DSA / Dilithium, FIPS 205: SLH-DSA / SPHINCS+, FIPS 206: FN-DSA / Falcon). STARK sizes from published StarkWare benchmarks.
[3] Interstellar OS: Sovereign Verification Kernel Specification (concept spec), S12.5 (Storage Client) and S5.6 (Module Archive). “The kernel’s security is unaffected by storage source because all payloads are hash-verified.”
[4] Zenon Commit Channels Protocol Specification v1.5, S12 (Payload Schemes and Payload Criticality). “The chain does not enforce payload availability.”