§5. Layers of the Cypherpunk Stack
The threat model makes one thing clear: this is not a single‑layer contest between “good” and “bad” money. It is a contest across an entire stack of machinery, from silicon and power all the way up to governance and law. States under stress will not only try to tax balances or censor individual transactions; they will reach for routing, hardware mandates, identity schemes, update channels, and legal definitions of “safety.” If we want Privacy, Proofs, and Compute to behave like monetary primitives rather than platform features, we have to meet them on all of those planes.
It is useful, therefore, to make the architecture explicit as a set of layers. We already speak of Layer 0: Verifiable Machines, which means open hardware and sampled supply chains as base reality. Above that, the thesis has described triad‑centric layers in terms of Create/Compute → Prove → Settle → Verify, with Telemetry & Governance threaded through. We can now refine this into a seven‑layer stack that stretches from matter to institutions:
- Layer 0: Verifiable Machines (and power)
- Layer 1: Reachability (communications and transport)
- Layer 2: Distribution & Execution (software supply and runtime)
- Layer 3: Identity & Claims (who may act, without doxxing)
- Layer 4: Truth & Work (proofs, VerifyPrice, and useful compute)
- Layer 5: Value & Settlement (privacy rails and non‑custodial flow)
- Layer 6: Governance & Telemetry (keeping the system honest)
The rest of the document can then be read as a tour through these layers.
Layer 0 remains the bedrock. Verifiable Machines are the point where cryptography stops being metaphor and touches matter. The chips that sign, prove, randomize, and capture must not be opaque priesthood artifacts; they must be at least partially inspectable and sampled. That means open RTL or microarchitectures where we can get them, open PDKs where politics allow, and structured lot‑sampling and imaging where we cannot. It also means treating power and physical plant as part of Layer 0 rather than an afterthought: a prover farm or router that cannot withstand a planned blackout is as brittle as a closed chip. In practice, Layer 0 is “verifiable machines on verifiable power,” which means open designs, measurable side‑channel budgets, lot attestations, and micro‑grids or backup arrangements that keep the hardware honest and powered when stress arrives.
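To make the lot‑sampling idea concrete, here is a minimal sketch of what a lot attestation record and its acceptance check might look like. The field names, lot sizes, and thresholds are illustrative assumptions for the sketch, not a specification of any real attestation format.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LotAttestation:
    lot_id: str          # fab lot identifier
    design_hash: str     # hash of the open RTL / mask set the lot claims to implement
    dies_in_lot: int     # total dies produced in the lot
    dies_sampled: int    # dies pulled for imaging and side-channel testing
    dies_passed: int     # sampled dies that stayed within the published budget


def lot_acceptable(att: LotAttestation, expected_design_hash: str,
                   min_sample_rate: float = 0.01,
                   min_pass_rate: float = 1.0) -> bool:
    """Accept a lot only if it claims the expected open design and its
    sampling regime meets minimum coverage and pass-rate thresholds."""
    if att.design_hash != expected_design_hash or att.dies_sampled == 0:
        return False
    sample_rate = att.dies_sampled / att.dies_in_lot
    pass_rate = att.dies_passed / att.dies_sampled
    return sample_rate >= min_sample_rate and pass_rate >= min_pass_rate
```

The point of the structure is that acceptance never depends on trusting the fab's word alone: the design hash pins the claim to an inspectable artifact, and the sampling numbers are publishable alongside it.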
On top of that sits Layer 1: Reachability. If Layer 0 asks “can we trust what the machine does?”, Layer 1 asks “can we talk to it at all?” A repression toolkit that has already learned to pull levers on subsea cables, IXPs, DNS roots, and mobile carriers will aim squarely at transport. Here the goal is simple: make it hard to turn the network off without turning the country off. Concretely, that means encrypted transports that blend into ordinary traffic, obfuscation that defeats naive DPI and active probing, alternate paths through satellite and radio when fiber is throttled, and enough diversity of routes that there is no single “kill switch.” In the stack metaphor, Layer 1 is the oxygen line—if it fails, proofs and privacy become academic. The communications section we introduce later makes this concrete: Tor‑class pluggable transports, encrypted handshakes, refraction networking where cooperators exist, plus sat/mesh/sneakernet fallbacks all belong here.
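As a rough illustration of the "no single kill switch" requirement, the sketch below walks a ranked ladder of transports until one reaches a peer. The transport names, endpoints, and probe function are placeholders standing in for real encrypted or obfuscated dialers, not an actual client API.

```python
import socket
from typing import Callable, Optional

Transport = Callable[[], Optional[socket.socket]]


def tcp_probe(host: str, port: int, timeout: float = 3.0) -> Optional[socket.socket]:
    """Plain TCP reachability probe; stands in for a real encrypted or
    obfuscated transport dialer."""
    try:
        return socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return None


def connect_with_fallback(ladder: list[tuple[str, Transport]]) -> Optional[str]:
    """Walk the transport ladder and return the first strategy that reaches
    the peer; None means fully partitioned (fall back to sneakernet)."""
    for name, dial in ladder:
        sock = dial()
        if sock is not None:
            sock.close()
            return name
    return None


# Illustrative ladder; hosts and ports are placeholders.
ladder = [
    ("direct-tls",  lambda: tcp_probe("node.example.org", 443)),
    ("obfuscated",  lambda: tcp_probe("bridge.example.net", 8443)),
    ("sat-gateway", lambda: tcp_probe("gw.sat.example", 4433)),
]
```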
Layer 2: Distribution & Execution answers a quieter but equally potent adversary move: “if we cannot block the packets, we will block the programs.” App stores, CDNs, corporate MDM policies, and automatic update systems are powerful levers. A network whose clients can only be installed or updated with the blessing of a small set of platforms is not neutral; it is merely waiting to be deputized. Layer 2 is where we insist that the code that speaks triad‑money can circulate without anyone’s permission. Content‑addressed binaries, signed manifests mirrored across jurisdictions, USB/QR/audio installers, and runtimes that can be fetched and verified over whatever transport is available are the practical elements. Execution environments matter too: if the only viable runtime is a locked‑down phone OS wired to one app store and one identity scheme, the repression toolkit has already won. The software distribution section we add later sits here, making update and execution independence a first‑class requirement.
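The core mechanic of Layer 2 can be shown in a few lines: a client that fetched a binary over any channel checks its content hash against an entry in a signed manifest, so CDN mirrors, USB sticks, and QR transfers are interchangeable. The manifest format below is a hypothetical sketch, and verification of the manifest's own signature is assumed to happen elsewhere.

```python
import hashlib
import json


def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def verify_artifact(manifest_json: str, artifact_name: str, artifact_bytes: bytes) -> bool:
    """Return True iff the artifact's content hash matches the manifest entry,
    so the bytes may come from any mirror or sneakernet copy."""
    manifest = json.loads(manifest_json)
    expected = manifest["artifacts"].get(artifact_name)
    return expected is not None and expected == sha256_hex(artifact_bytes)


# Example: the same check passes whether the bytes came from a CDN or a USB stick.
manifest = json.dumps({"artifacts": {"wallet-v1.2.bin": sha256_hex(b"demo-binary")}})
assert verify_artifact(manifest, "wallet-v1.2.bin", b"demo-binary")
```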
If Layer 2 makes sure the code can run, Layer 3: Identity & Claims makes sure it can express “who” is acting without collapsing into real‑name KYC. The adversary here is the growing web of SIM registration, network identity, and “safety” regimes that treat anonymity itself as suspicious. The stack does not deny that some actions must be accountable; it refuses to equate accountability with global doxxing. At this layer, keys, credentials, and policies are defined in a way that allows entities to prove the right to act (spend, sign, operate a prover) without binding that right to a civil identity unless absolutely necessary. Anonymous or pseudonymous credentials, selective‑disclosure proofs, and reputation linked to receipts and behavior rather than to phone numbers belong here. When we later speak of viewing keys, lawful‑privacy corridors, and audit‑friendly receipts, we are concretely filling in Layer 3.
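A toy sketch of the Layer 3 contract: authorization consults claims, never civil identity. In a real system the claims would be backed by selective‑disclosure credentials rather than plaintext strings; the structure and names here are illustrative only.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Credential:
    holder_pseudonym: str   # stable pseudonym, not a legal identity
    claims: frozenset[str]  # e.g. {"may_operate_prover", "sla_tier:gold"}


def authorize(credential: Credential, required_claims: set[str]) -> bool:
    """Grant the action iff every required claim is present. Note what is
    *not* consulted: name, phone number, SIM, or national ID."""
    return required_claims <= credential.claims


cred = Credential("prover-7f3a", frozenset({"may_operate_prover", "sla_tier:gold"}))
assert authorize(cred, {"may_operate_prover"})
```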
Only once machines are honest, packets can move, code can run, and actors can be named without being exposed do we reach Layer 4: Truth & Work, the core of the original thesis. This is where proofs, VerifyPrice, and useful work live. The job of this layer is to answer, cheaply and publicly, the question “did this actually happen as claimed?” It is here that verification asymmetry is engineered: MatMul‑PoUW constructions and ZK proof systems that make checking an order of magnitude cheaper than doing; harnesses that measure p50/p95 verifier time and cost for each workload; and markets that pay only for receipts that pass those checks. In the earlier language of the loop, Layer 4 is the Prove/Verify spine: the circuits, proofs, and reference verifiers that turn “trust me” into “verify me” at scale. Much of the existing text (Verification Asymmetry & VerifyPrice, Proof Factories, AI‑PoUW, the modular stack) already describes this layer in detail.
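The kind of measurement a VerifyPrice harness performs can be sketched directly: time the work and the verification, then publish p50/p95 verifier latency and the asymmetry ratio. The harness below is a simplified illustration with stand‑in workload functions, not the measurement code described later in the thesis.

```python
import statistics
import time
from typing import Callable


def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile over a small sample set."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]


def measure(fn: Callable[[], None], runs: int = 20) -> list[float]:
    """Wall-clock timings for repeated runs of a workload or verifier."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return times


def verify_price_report(do_work: Callable[[], None],
                        verify_receipt: Callable[[], None]) -> dict:
    """Time the workload and its verifier, then report verifier p50/p95 and
    the work/verify asymmetry ratio."""
    work_t = measure(do_work)
    verify_t = measure(verify_receipt)
    return {
        "verify_p50_s": percentile(verify_t, 50),
        "verify_p95_s": percentile(verify_t, 95),
        "asymmetry": statistics.median(work_t) / statistics.median(verify_t),
    }
```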
With Layer 4 in place, Layer 5: Value & Settlement can safely be money‑like. This is where privacy rails, atomic swaps, shielded pools, and cross‑chain bridges live. Its task is twofold: move value without custody or censorship, and do so in a way that preserves privacy by default while leaving a path for auditable disclosure. Here the adversary is both the classical financial repression toolkit (negative real yields enforced through custodians and captive institutions) and the newer chokepoints of KYC’d exchanges, surveilled payment processors, and compliant stablecoin issuers. The response is not rhetoric; it is architectures that keep custody at the edge, settlement neutral in the middle, and content private unless a viewing key is invoked. Adaptor‑signature swaps between BTC and privacy assets, lawful‑privacy corridors for treasuries, bridge designs that avoid trusted relays, and settlement telemetry (success rates, refund safety, anonymity‑set health) are the concrete pieces. The Privacy Rails sections and the new communications and identity work all culminate here.
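Settlement telemetry at this layer reduces to a few health checks, sketched below with hypothetical field names and thresholds: a corridor counts as healthy only if swaps settle, failures refund cleanly, and the anonymity set has not collapsed.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CorridorStats:
    swaps_attempted: int
    swaps_settled: int
    swaps_refunded: int      # failed swaps that returned funds to the sender
    anonymity_set_size: int  # e.g. active shielded outputs in the receiving pool


def corridor_healthy(s: CorridorStats,
                     min_success: float = 0.95,
                     min_refund_safety: float = 0.99,
                     min_anonymity_set: int = 1_000) -> bool:
    """Health gate: success rate, refund safety on failures, and a floor on
    anonymity-set size must all hold at once."""
    if s.swaps_attempted == 0:
        return False
    failed = s.swaps_attempted - s.swaps_settled
    success = s.swaps_settled / s.swaps_attempted
    refund_safety = 1.0 if failed == 0 else s.swaps_refunded / failed
    return (success >= min_success
            and refund_safety >= min_refund_safety
            and s.anonymity_set_size >= min_anonymity_set)
```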
Finally, Layer 6: Governance & Telemetry closes the loop. If the lower layers are the nervous system, this is the cortex and immune system: how the network notices drift, resists capture, and adapts under stress. Without this layer, even beautifully engineered proofs and privacy corridors will decay into the usual pattern of oligopolies and black boxes. In practice, Layer 6 is where VerifyPrice observatories, decentralization dashboards, corridor‑health monitors, and incident reports live, along with the institutions that respond to them. It is also where operational and legal posture is set: how routers are constrained so they cannot quietly privilege “house” flow; how upgrades are staged so Layer 0 and Layer 4 stay aligned; how incident response works when a corridor breaks or a hardware profile is compromised; how policy engagement happens without creating a single political point of failure. The mantra “no dashboards, no trust” belongs to this layer: if the public cannot see verification cost, neutrality, censorship, and settlement health, the system has already slid back toward platform trust.
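To show what "no dashboards, no trust" can mean operationally, here is an illustrative drift check a public monitor might run over router snapshots. The metric names and thresholds are assumptions made for this sketch, not quantities defined elsewhere in the thesis.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RouterSnapshot:
    router_id: str
    total_flow: int       # settled intents routed in the reporting window
    house_flow: int       # intents routed to operator-affiliated counterparties
    verify_p95_ms: float  # observed p95 verification latency for routed receipts


def drift_alerts(snapshot: RouterSnapshot,
                 baseline_verify_p95_ms: float,
                 max_house_share: float = 0.20,
                 max_verify_regression: float = 1.5) -> list[str]:
    """Flag neutrality and VerifyPrice drift against published baselines."""
    alerts = []
    if snapshot.total_flow and snapshot.house_flow / snapshot.total_flow > max_house_share:
        alerts.append(f"{snapshot.router_id}: house flow exceeds neutrality threshold")
    if snapshot.verify_p95_ms > baseline_verify_p95_ms * max_verify_regression:
        alerts.append(f"{snapshot.router_id}: verification cost regressed vs. baseline")
    return alerts
```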
Seen this way, the cypherpunk monetary stack is not one clever consensus mechanism but a ladder of seven interlocking layers. Layer 0 and Layer 1 keep the silicon honest and the packets flowing. Layer 2 and Layer 3 ensure that code and identity cannot be quietly turned into chokepoints. Layer 4 turns work into truth with receipts anyone can verify. Layer 5 turns those receipts into private, non‑custodial settlement. Layer 6 watches the whole organism and pushes it back toward neutrality when it drifts. The sections that follow simply walk this ladder: beginning with verifiable machines, extending upward into communications resilience, distribution, and identity, and then re‑entering the territory already mapped (proofs, compute, settlement, governance) with a clearer sense of what each layer must survive.
5.1 Modules in outline: applications and primitives
The layered picture tells us where things must live; the modular picture tells builders what they actually touch. We will use four reference applications as running examples throughout the rest of the thesis:
- Private treasury & payroll: Pays staff and vendors non‑custodially over privacy rails, emits receipts that auditors can check, and treats “who got paid what” as a matter of viewing keys, not public gossip (see the sketch after this list).
- Media provenance & authenticity: Cameras, sensors, and editors emit cryptographic lineage that survives platform stripping; payments to creators and data providers are conditioned on the presence of such receipts.
- Verified inference: A market where model owners and service providers sell inferences that come with succinct proofs or hybrid proofs‑of‑logits, all priced and measured via VerifyPrice.
- Proof/compute procurement: A rail where developers and treasuries can buy standardized units of “proofs” and “verified FLOPs” as futures or spot capacity and treat them as balance‑sheet items next to BTC.
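For the private treasury & payroll application, the sketch below illustrates the viewing‑key posture: the payment object carries an encrypted memo, and "who got paid what" opens only for holders of the viewing key. The names and the decrypt callback are hypothetical, standing in for whatever shielded‑memo scheme the rails actually use.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class PayrollPayment:
    recipient_addr: str    # shielded address, not linkable on-chain
    amount_sats: int
    encrypted_memo: bytes  # payee / invoice details, sealed to the viewing key
    receipt_id: str        # settlement receipt an auditor can check independently


def audit_view(payment: PayrollPayment, viewing_key: bytes,
               decrypt: Callable[[bytes, bytes], str]) -> dict:
    """Auditor path: with the viewing key the memo opens; without it, only the
    fact that a valid payment settled (the receipt) is visible."""
    return {
        "receipt_id": payment.receipt_id,
        "amount_sats": payment.amount_sats,
        "memo": decrypt(viewing_key, payment.encrypted_memo),
    }
```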
To support these, we rely on a small set of reusable primitives:
- A Proofs‑as‑a‑Library SDK (PaL) that lets developers declare claims (“prove this computation,” “prove this provenance”) and compiles them to multiple proving backends and useful‑work miners.
- A Privacy Rails Kit (PRK) that expresses pay‑for‑proof intents and executes refund‑safe settlement over BTC↔ZEC/XMR corridors or shielded pools.
- A Proof Interface Definition Language (PIDL) that defines the minimal receipt that binds a claim, a proof, a workload ID, an SLA tier, and timestamps into a portable object (see the sketch after this list).
- A VerifyPrice observatory that provides public metrics for how cheap verification actually is for each canonical workload.
- A set of market and telemetry primitives (SLA escrows, neutral routers, bridge‑safety templates) that make it possible to treat proofs, privacy capacity, and verified FLOPs as commodities rather than favours.
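The PIDL receipt mentioned above can be pictured as a small, content‑addressed object; the sketch below uses hypothetical field names to show how a claim, proof reference, workload ID, SLA tier, and timestamps might be bound together.

```python
import hashlib
import json
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class Receipt:
    claim: str        # what was asserted, e.g. "matmul(A, B) = C"
    proof_ref: str    # hash or URI of the proof blob
    workload_id: str  # canonical workload this receipt is priced against
    sla_tier: str     # e.g. "gold", "best-effort"
    created_at: int   # unix seconds when the work completed
    proved_at: int    # unix seconds when the proof was produced

    def receipt_id(self) -> str:
        """Content-address the receipt so any party can reference it portably."""
        canonical = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()
```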
Later sections turn this into a full “substrate kit” of twelve primitives and four reference applications. Until then, we will use the names PaL, PIDL, PRK, proof factories, and VerifyPrice observatory as shorthand for these building blocks.