§28. Implementation Sketches for Builders
Everything above is the blueprint. This section is about what to actually ship first if you want to see this stack exist in the world: how to get from “Privacy, Proofs, Compute as monetary primitives” to running code, live networks, and receipts that real applications depend on.
28.0 Minimum Viable Stack (MVS): What to Ship First
Builders will ask: “What is the minimal sequence?” The answer matters because dependencies are real: you cannot credibly claim “proof commodity markets” if receipts can be faked, and you cannot claim “privacy rails” if corridors aren’t refund-safe.
MVS Sequencing (dependency-true):
| Priority | Component | What It Enables | Dependency |
|---|---|---|---|
| 1 | PIDL receipt schema + reference verifier harness + dataset anchoring | Telemetry cannot be faked; receipts are portable and verifiable | None (foundation) |
| 2 | One corridor kernel (atomic + refund-safe) + VerifySettle dashboard | Non-custodial settlement with public health metrics | PIDL receipts |
| 3 | One proof factory backend for 1–2 canonical workloads + VerifyPrice dashboard | Proof commodity market with public pricing | PIDL receipts |
| 4 | PaL SDK + PRK SDK | Developer surfaces for “prove this” and “pay for this privately” | Corridor + proof factory |
| 5 | Neutral router + fairness tests + admission metrics | Open participation with visible neutrality | PaL/PRK + dashboards |
| 6 | PoUW pilots + wider consensus experiments | Useful-work security model | All above stable |
Why this order:

- Receipts first: Without verifiable receipts, all claims are unpriced. Ship PIDL, reference verifiers, and dataset anchoring before anything else. This is the foundation that makes “no dashboards, no trust” real.
- One corridor before many: A single refund-safe, non-custodial corridor with public VerifySettle is worth more than ten “coming soon” announcements. Ship BTC↔ZEC or BTC↔XMR with 100% refund safety before expanding.
- One proof factory before PoUW: Prove that canonical workloads can be produced, verified, and priced before adding consensus complexity. PROOF_2^20 and MATMUL_4096 are good starting points.
- SDKs after primitives: PaL and PRK are developer surfaces, not primitives. Ship them once the underlying corridors and proof factories are stable.
- Neutral routing after basic markets: Fairness tests and admission metrics matter only once there’s enough activity to measure. Don’t over-engineer routing before you have provers to route.
- PoUW last: Proof-of-useful-work consensus is the most complex piece. It requires stable proof factories, receipts, and measurement infrastructure. Don’t attempt it until Priorities 1–5 are solid.
Hardware profile clarity:
§28.1 mentions “commodity GPUs with well-understood drivers” as an “open-ish hardware profile.” To be explicit: this is Profile Class: Partially Closed, High Familiarity. GPUs are not verifiable machines in the same sense as open RTL—they are “known black boxes” whose behavior is well-characterized but not inspectable. Label profiles clearly so users understand the trust assumptions.
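As a minimal sketch of what explicit labeling could look like in code (the enum values, field names, and the example profile below are illustrative assumptions, not a fixed taxonomy):

```python
from dataclasses import dataclass
from enum import Enum

class ProfileClass(Enum):
    """Trust labels for hardware profiles; values are illustrative."""
    FULLY_OPEN = "fully_open"            # open RTL, netlists, layout published
    PARTIALLY_CLOSED_HIGH_FAMILIARITY = "partially_closed_high_familiarity"
    CLOSED = "closed"                    # opaque silicon, vendor word only

@dataclass(frozen=True)
class HardwareProfile:
    profile_id: str                # hypothetical ID, e.g. "gpu-commodity-v1"
    profile_class: ProfileClass
    chip_family: str
    driver_versions: tuple

# Commodity GPUs: well-characterized "known black boxes", not inspectable RTL.
GPU_PROFILE = HardwareProfile(
    profile_id="gpu-commodity-v1",
    profile_class=ProfileClass.PARTIALLY_CLOSED_HIGH_FAMILIARITY,
    chip_family="consumer-gpu",
    driver_versions=("535.x",),
)
print(GPU_PROFILE.profile_class.value)
```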
28.1 Layer-0 verifiable machines and proof factories
The stack cannot honestly claim “verify, don’t trust” if its base machines are unanalyzable. But we also can’t wait for a utopian, fully-open fab. The implementation path for Layer 0 is therefore incremental and pragmatic.
Scientific chips first, TEEs later. Start with small, fully open “security evaluation” chips: key-storage/signing cores, simple TRNG/PUF arrays, and test-vehicle TEEs with hardened side-channel and tamper responses. For these, publish RTL, netlists, and layout; attach a public bug bounty; and treat them as lab equipment for the community. The goal is not volume yet; it is to set a bar for what “inspectable hardware” looks like and to give auditors and academics something to tear apart.
Sampling and imaging as part of the product. For each tape-out, commit to a fixed sampling plan: draw devices from each lot via verifiable randomness, subject them to structured self-tests and optical/SEM imaging, and publish anonymized results. Those results become artifacts in their own right: “lot-attestations” that later proofs and receipts can reference as part of their hardware profile.
Expose hardware profiles to the software stack. In the Create/Compute step, make “which kind of machine did this?” a first-class field. A task or claim can specify “pure_zk_only,” “open_tee_v2 or better,” or “any backend.” Receipts produced at the Prove stage import the profile and tie it to concrete lots and test results. From the stack’s perspective, hardware is just another policy dimension: developers choose which profiles they accept; the market bids to satisfy them.
Integrate attested machines as one more backend. TEEs built from open profiles shouldn’t be magical; they should just be one of the ways the Prove layer can satisfy a claim. The enclave runs code, emits an attestation, and that attestation is wrapped in a SNARK or STARK so that everyone else can verify it like any other proof. The TEE is not the root of trust; it is a component in a larger proof.
On top of that, implement a minimal Layer-0/4 pattern:

- Pick one open-ish hardware profile (e.g., commodity GPUs with well-understood drivers).
- Define a hardware profile H with:
  - chip family, driver versions;
  - basic entropy tests;
  - power metering approach.
- Stand up a proof factory cluster that:
  - runs reference prover binaries in containers;
  - exposes a simple RPC that PaL can target;
  - emits attestation receipts tying proofs to hardware profile H.
- Publish:
  - basic telemetry (uptime, power use, proof throughput);
  - VerifyPrice samples for a starter workload (e.g., MatMul at dimension n).
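A minimal sketch of the attestation-receipt piece of this pattern, assuming a containerized prover behind a simple RPC; every name and field is illustrative, and the “proof” below is a placeholder hash, not a real proof system:

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AttestationReceipt:
    """Ties a proof artifact to the hardware profile H that produced it."""
    proof_hash: str
    hardware_profile: str      # hypothetical profile ID, e.g. "gpu-commodity-v1"
    driver_version: str
    entropy_test_passed: bool
    power_draw_watts: float    # from the profile's power-metering approach
    timestamp: float

def run_prover_and_attest(task_bytes: bytes, profile_id: str) -> AttestationReceipt:
    """Stand-in for the factory's RPC handler: run a reference prover
    binary in a container, then emit an attestation receipt."""
    proof_hash = hashlib.sha256(task_bytes).hexdigest()  # placeholder artifact
    return AttestationReceipt(
        proof_hash=proof_hash,
        hardware_profile=profile_id,
        driver_version="535.x",
        entropy_test_passed=True,
        power_draw_watts=280.0,
        timestamp=time.time(),
    )

receipt = run_prover_and_attest(b"matmul:n=4096", "gpu-commodity-v1")
print(json.dumps(asdict(receipt), indent=2))
```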
This doesn’t solve global hardware trust, but it creates a concrete Layer-0/4 instance that other pieces can integrate against. Hardware gives the proof markets something worth proving; the proof markets give the hardware world a reason to exist.
28.2 Privacy rails: making settlement safe, boring, and measurable
On the settlement side, the first job is to make non-custodial, privacy-preserving payouts so straightforward that both cypherpunks and treasurers treat them as infrastructure, not adventure.
An implementation path looks like this.
Standardize adaptor-signature swap flows. Take BTC↔XMR and BTC↔ZEC as canonical corridors and write down, once and for all, the message formats, time-lock conventions, refund procedures, and failure states. Wrap them in a minimal atomic-payout kernel that accepts three things (a destination asset, an amount, and a refund address) and emits either (a) a completed swap with receipts or (b) a structured failure with all funds safely returned.
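A sketch of the kernel’s surface under those constraints: three inputs, exactly two structured outcomes. Everything here is simulated (the rate, txids, and failure codes are placeholders); the point is the shape of the contract, not the swap machinery underneath:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class SwapReceipt:
    corridor: str          # e.g. "BTC->ZEC"
    amount_in: float
    amount_out: float
    txids: tuple           # (funding txid, payout txid)

@dataclass
class SwapFailure:
    reason: str            # machine-readable: "liquidity_exhausted", ...
    refund_txid: str       # all funds safely returned

def atomic_payout(dest_asset: str, amount: float,
                  refund_address: str) -> Union[SwapReceipt, SwapFailure]:
    """Simulated kernel: three inputs, exactly two outcomes. A real
    implementation orchestrates adaptor-signature or HTLC flows; here
    the failure path stands in for a fired time-locked refund."""
    if amount <= 0:
        return SwapFailure(reason="invalid_amount",
                           refund_txid=f"refund-to-{refund_address}")
    rate = 150.0  # placeholder BTC->ZEC rate, not a live quote
    return SwapReceipt(corridor=f"BTC->{dest_asset}",
                       amount_in=amount,
                       amount_out=amount * rate,
                       txids=("btc-txid-...", "zec-txid-..."))

print(atomic_payout("ZEC", 0.5, "bc1q-refund-addr"))
```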
A very small corridor prototype is enough to start:

- One BTC full node + signer.
- One ZEC/XMR node + signer.
- A coordination daemon that:
  - accepts intents: “swap X BTC for Z amount of ZEC within bounds B”;
  - orchestrates adaptor-signature or HTLC patterns;
  - enforces time-locked refunds.
- A basic treasury tool that:
  - ingests CSV or API feeds from payroll/ERP;
  - compiles a batch of corridor swaps;
  - tracks receipts and statuses;
  - exposes selective-disclosure reports (e.g., “total paid, to how many pseudonyms, under which policies”).
This gives you:
- a real corridor with non-custodial, refund-safe settlement;
- enough instrumentation to plug into VerifySettle;
- something treasurers and auditors can actually touch and critique.
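A toy version of the treasury tool’s batch-and-disclose path, assuming a hypothetical CSV payroll export; the column names, refund placeholder, and report shape are illustrative:

```python
import csv
import io
from collections import Counter

# Hypothetical payroll export: pseudonym, destination asset, amount in BTC.
PAYROLL_CSV = """pseudonym,dest_asset,amount_btc
zk-badger,ZEC,0.50
night-heron,ZEC,0.25
quiet-fox,XMR,0.10
"""

def compile_batch(csv_text: str) -> list:
    """Turn a payroll feed into a batch of corridor swap intents."""
    return [
        {"dest_asset": row["dest_asset"],
         "amount": float(row["amount_btc"]),
         "refund_address": "treasury-refund-addr"}  # placeholder
        for row in csv.DictReader(io.StringIO(csv_text))
    ]

def disclosure_report(batch: list) -> dict:
    """Selective disclosure: totals and counts, never identities."""
    return {
        "total_paid_btc": sum(i["amount"] for i in batch),
        "pseudonym_count": len(batch),
        "corridors": dict(Counter(i["dest_asset"] for i in batch)),
    }

batch = compile_batch(PAYROLL_CSV)
print(disclosure_report(batch))
# {'total_paid_btc': 0.85, 'pseudonym_count': 3, 'corridors': {'ZEC': 2, 'XMR': 1}}
```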
Ship wallet UX with recovery, not bravado. Build thin, audited wallet components around that kernel: clear progress indicators, explicit timeouts, and visible “abort and refund” buttons. Errors should produce human-readable explanations and machine-readable telemetry (“liquidity exhausted,” “counterparty offline,” “fee spike”), not just a red X. The UX goal is simple: after a handful of runs, a non-expert should trust that the worst case is a refund, not a vanished balance.
Document settlement latencies and success rates. For each corridor, publish historical p50/p95 time-to-finality, success rates, and failure breakdowns. Enterprises need this for policy: they will accept a 45-minute p95 if refund safety is 100%, but not if “hung trades” are common. These metrics are part of the same Telemetry layer as VerifyPrice; they are what turn privacy rails from ideology into a service level.
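A sketch of how those published numbers could be computed from raw settlement records; the field names and the refund-safety definition are assumptions consistent with the text:

```python
import statistics

def corridor_slo(settle_times_sec: list, outcomes: list) -> dict:
    """Derive the published corridor numbers from raw trade records.
    Outcome entries are e.g. "completed" or "refunded"; refund safety
    counts every trade that ended in a known-good state, never "hung"."""
    qs = statistics.quantiles(settle_times_sec, n=100)  # 99 cut points
    completed = outcomes.count("completed")
    return {
        "p50_sec": qs[49],
        "p95_sec": qs[94],
        "success_rate": completed / len(outcomes),
        "refund_safety": (completed + outcomes.count("refunded")) / len(outcomes),
    }

# Toy records: four completed trades and one refunded one.
print(corridor_slo([620, 700, 890, 1500, 2400],
                   ["completed"] * 4 + ["refunded"]))
```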
Add lawful-privacy scaffolding. For institutions, add viewing-key support, receipt exports, and simple “audit bundles” that can be handed to an internal risk team or regulator. The point is not to invite surveillance back in; it is to show that even with privacy by default you can prove that obligations were met and funds were not misappropriated, without reintroducing custodians.
Once these rails are in place, the higher layers only ever see a function call: “settle this receipt for X in asset Y under privacy budget Z.” The messy choreography of swaps and shielded pools collapses behind that call. Settle becomes a reliable stage in the loop, not a bespoke integration every team must hand-roll.
28.3 Proof factories: receipts as a service
The next implementation target is the Prove stage: turning raw work into portable receipts that any chain, app, or institution can consume.
In practice this means building a Proofs-as-a-Library SDK (PaL) and a minimal proof factory.
Unify proof outputs under a receipt schema. Start by defining a compact, chain-agnostic receipt format (your PIDL): the claim hash, workload ID, circuit or model hash, proof hash or transcript commitments, SLA tier, start/end timestamps, resource usage, and provider signatures. Whether the underlying artifact is a MatMul transcript, a logits fingerprint, or a SNARK, all of them fit into this envelope.
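A minimal sketch of that envelope as a data type, with a deterministic encoding so receipt hashes are stable across serializers; the field names track the list above but are not a frozen spec:

```python
import hashlib
import json
from dataclasses import dataclass, asdict, field

@dataclass
class PIDLReceipt:
    """Chain-agnostic receipt envelope; illustrative, not a frozen spec."""
    claim_hash: str
    workload_id: str            # e.g. "MATMUL_4096"
    circuit_or_model_hash: str
    proof_hash: str             # or transcript commitments
    sla_tier: str
    start_ts: float
    end_ts: float
    resource_usage: dict        # e.g. {"gpu_seconds": ..., "joules": ...}
    provider_sigs: list = field(default_factory=list)

    def canonical_bytes(self) -> bytes:
        """Deterministic encoding: the same receipt hashes the same way
        regardless of who serialized it. Signatures are excluded."""
        body = {k: v for k, v in asdict(self).items() if k != "provider_sigs"}
        return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

    def receipt_id(self) -> str:
        return hashlib.sha256(self.canonical_bytes()).hexdigest()
```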
Expose a single programming surface. For developers, the SDK should offer one set of verbs: declare_claim, request_proof, await_receipt, verify_receipt. Underneath, it can target multiple proof systems and networks: MatMul-PoUW chains, PoL-style inference markets, general ZK clouds, or open-TEE clusters. The developer doesn’t wire to “Platform X”; they attach their claim and policy to the SDK and let it find anyone who can satisfy the SLA.
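The four verbs in a runnable stub; the PaLClient class, its return shapes, and the simulated proof flow are assumptions built around the verbs named above:

```python
import hashlib
import time
from dataclasses import dataclass

@dataclass
class Claim:
    workload_id: str
    inputs_hash: str
    sla_tier: str

class PaLClient:
    """Stub client showing the four-verb surface; everything below the
    method signatures is simulation, not a real proof system."""
    def declare_claim(self, workload_id, inputs_hash, sla_tier) -> Claim:
        return Claim(workload_id, inputs_hash, sla_tier)

    def request_proof(self, claim: Claim) -> dict:
        # A real SDK would route to any backend satisfying the policy.
        return {"claim": claim, "job_id": "job-1"}

    def await_receipt(self, job: dict, timeout_sec: int = 600) -> dict:
        payload = f"{job['claim'].workload_id}:{job['claim'].inputs_hash}"
        return {"job_id": job["job_id"],
                "proof_hash": hashlib.sha256(payload.encode()).hexdigest(),
                "issued_at": time.time()}

    def verify_receipt(self, receipt: dict) -> bool:
        return "proof_hash" in receipt  # placeholder for real verification

pal = PaLClient()
claim = pal.declare_claim("MATMUL_4096", "0xabc...", "standard")
receipt = pal.await_receipt(pal.request_proof(claim))
assert pal.verify_receipt(receipt)
```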
Concretely, this is the PaL SDK + PIDL library + market interface:

- PaL SDK:
  - Language bindings (Rust/TypeScript/Python) exposing prove_compute, prove_provenance, and prove_settlement, plus descriptors for canonical workloads.
  - Automatic PIDL receipt construction.
  - Integration with proof factories / PoUW miners via neutral routers.
- PIDL library:
  - Types and schemas for receipts:
    - claim type, workload ID, proof ID;
    - hardware profile hints;
    - VerifyPrice snapshots;
    - optional compliance metadata.
  - Encoding/decoding to standard formats (e.g., CBOR/JSON with signatures).
- Market interface:
  - Simple contracts/APIs for:
    - posting “buy” orders for proofs/compute;
    - posting “sell” capacity as Work Credits or similar;
    - tracking filled orders and receipts.
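A toy in-memory version of the market interface, just to fix the order and fill shapes; a real deployment would be a contract or API, and all names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str          # "buy" proofs, or "sell" capacity (Work Credits)
    workload_id: str
    qty: int
    price: float       # per unit, in a settlement asset

class ProofMarket:
    """Toy matching surface; not a production order book."""
    def __init__(self):
        self.buys, self.sells, self.fills = [], [], []

    def post(self, order: Order):
        (self.buys if order.side == "buy" else self.sells).append(order)
        self._match()

    def _match(self):
        for b in list(self.buys):
            for s in list(self.sells):
                if s.workload_id == b.workload_id and s.price <= b.price:
                    qty = min(b.qty, s.qty)
                    self.fills.append((b.workload_id, qty, s.price))
                    b.qty -= qty
                    s.qty -= qty
                    if s.qty == 0:
                        self.sells.remove(s)
                    if b.qty == 0:
                        self.buys.remove(b)
                        break

m = ProofMarket()
m.post(Order("sell", "PROOF_2^20", qty=10, price=0.02))
m.post(Order("buy", "PROOF_2^20", qty=4, price=0.03))
print(m.fills)  # [('PROOF_2^20', 4, 0.02)]
```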
With these in place, builders can:
- bolt proof verification and privacy rails onto existing apps;
- treat the triad as services, not as a new religion;
- create applications that implicitly grow demand for Privacy, Proofs, and Compute.
Integrate multiple zkVMs and proof systems from day one. Monoculture is the enemy of both security and economics. The factory should be multi-ZK and multi-backend by construction: support at least one pairing-based SNARK, one STARK or FRI-based system, and one zkVM at launch, with a clear path to adding more. Selection logic should consider not only price and latency but also decentralization metrics and hardware profiles.
Make telemetry a first-class product. Every proof request and receipt should emit metrics into the VerifyPrice Observatory: p50/p95 verifier times and costs for each workload, failure modes, queue depths per backend, variance under adversarial mixes. If a prover or network begins to cheat on SLAs, the factory should be able to detect it quickly and route around them.
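One way the selection logic could blend that telemetry into a routing score; the weights, field names, and the inverse-price/latency form are illustrative assumptions, not a recommended objective:

```python
def score_backend(b: dict, weights=(0.5, 0.3, 0.2)) -> float:
    """Blend price, latency, and decentralization into one routing score.
    Lower price and p95 verify time are better; a higher Nakamoto
    coefficient (decentralization proxy) is better."""
    w_price, w_lat, w_dec = weights
    return (w_price / b["price_usd"]
            + w_lat / b["p95_verify_ms"]
            + w_dec * b["nakamoto_coefficient"])

# Hypothetical telemetry snapshots from the VerifyPrice Observatory.
backends = [
    {"name": "snark-pairing", "price_usd": 0.010, "p95_verify_ms": 8,
     "nakamoto_coefficient": 3},
    {"name": "stark-fri", "price_usd": 0.014, "p95_verify_ms": 25,
     "nakamoto_coefficient": 9},
    {"name": "zkvm-generic", "price_usd": 0.030, "p95_verify_ms": 40,
     "nakamoto_coefficient": 12},
]
print(max(backends, key=score_backend)["name"])  # under these toy weights
```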
Over time, this factory becomes an invisible utility. AI chains, rollups, oracles, and auditors all tap into the same receipt pool, and the receipts themselves start to look like commodities: standardized units of “truth” that anyone can price, escrow, or save.
28.4 Compute consensus pilots: Duplex, Ambient, Nockchain as patterns
With privacy rails and proof factories in place, you can begin to anchor consensus itself in useful work. Here, it is helpful to think in terms of reference pilots, not one chain to rule them all. Duplex, Ambient, and Nockchain offer three distinct patterns:
MatMul PoUW pilot (Duplex-style). Launch a devnet where the block-making puzzle is a MatMul instance drawn from a canonical distribution. Use the workload registry/WF-ABI to describe tasks; derive instances from header randomness; implement low-rank noise so that miners can return both a correct product and a succinct proof. Wire block validity to the presence of a valid MatMul receipt whose VerifyPrice parameters are below published thresholds. Instrument everything: r(W) ratios, verifier times on consumer hardware, time-to-first-proof for new miners, and centralization metrics.
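A sketch of instance derivation from header randomness, using numpy; the distribution, the rank, and the placement of the low-rank term are guesses at the pattern, not the Duplex construction:

```python
import hashlib
import numpy as np

def matmul_instance_from_header(header_hash: bytes, n: int = 4096,
                                rank: int = 8):
    """Derive a canonical MatMul instance from block-header randomness.
    Every node re-derives the same (A, B) from the seed, so the puzzle
    cannot be chosen by the miner; the low-rank term is a cheap-to-commit
    perturbation whose exact role belongs to the protocol spec."""
    seed = int.from_bytes(hashlib.sha256(header_hash).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n), dtype=np.float32)
    B = rng.standard_normal((n, n), dtype=np.float32)
    U = rng.standard_normal((n, rank), dtype=np.float32)
    V = rng.standard_normal((rank, n), dtype=np.float32)
    return A, B + U @ V   # U @ V has rank <= `rank`

A, B = matmul_instance_from_header(b"block-header-bytes", n=256)
C = A @ B   # the miner's useful work; the receipt proves C is correct
```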
Verified-inference pilot (Ambient-style PoL). In parallel, build a network (or a module that any network can adopt) where the work function is “serve model inferences with proofs of honesty.” Commit to models and datasets; design logits-fingerprint schemes that are simple, deterministic, and uniform; add randomized audits and peer-prediction-based slashing. Here full ZKML may be too expensive, so hybrid verification is acceptable, but only if it is exposed honestly: publish honest-output rates, audit coverage, and failure patterns as SLOs, not marketing.
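A sketch of a simple, deterministic logits fingerprint (quantize, take top-k, hash); k, the quantization step, and the payload format are illustrative knobs, and a real scheme must also contend with floating-point nondeterminism across hardware:

```python
import hashlib
import numpy as np

def logits_fingerprint(logits: np.ndarray, k: int = 16,
                       decimals: int = 3) -> str:
    """Deterministic fingerprint of an inference's output distribution:
    quantize the logits, keep the top-k (index, value) pairs, hash them.
    Two honest servers running the same model on the same prompt should
    produce identical fingerprints; an auditor only needs the hash."""
    q = np.round(logits.astype(np.float64), decimals)
    top = np.argsort(q)[::-1][:k]          # top-k token indices by value
    payload = ",".join(f"{i}:{q[i]:.3f}" for i in top)
    return hashlib.sha256(payload.encode()).hexdigest()

rng = np.random.default_rng(0)
logits = rng.standard_normal(50_000)       # stand-in for real model logits
print(logits_fingerprint(logits)[:16])
```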
zk-Proof-of-Work / receipt ledger pilot (Nockchain-style). Finally, either integrate with or prototype a chain whose primary job is to produce and timestamp zk proofs themselves. This network can serve as a common receipt ledger: proofs from many domains are anchored here with minimal metadata and availability guarantees. Its own work function might be “produce a zk proof of a simple, fixed circuit,” or could be tied to verifying receipts from other domains. Either way, it is where proofs go to be found and ordered.
These pilots need not (and probably should not) be over-promised as final destinations. Their job in the early years is to act as laboratories: to establish that verification asymmetry holds in practice at scale, that decentralization can be preserved under useful-work mining, and that receipts remain cheap to check even when adversaries are paying attention. They give the thesis something concrete to point at: “this is what it looks like when block rewards pay for FLOPs and proofs instead of heat.”
28.5 Developer kit: make “import proofs” the default
The last implementation sketch is about developer experience. If every application needs a team of cryptographers and protocol engineers to participate, nothing scales.
The basic shape of a developer kit is already implicit in the loop; making it explicit helps:
Claims & proving library. A library that plugs into common languages and frameworks and exposes a simple API: mark this function as “must be proven,” mark this data stream as “must be provenance-tracked,” specify acceptable backends and SLAs, and let PaL handle task generation and proof requests.
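A sketch of what “mark this function as must-be-proven” could feel like, as a Python decorator; the decorator name, claim fields, and the print stand-in for PaL’s declare_claim are all hypothetical:

```python
import functools
import hashlib
import json

def must_be_proven(workload_id: str, sla_tier: str = "standard"):
    """Illustrative decorator: run the function, then emit a claim that
    PaL could turn into a proof request. The claim plumbing is simulated."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            claim = {
                "workload_id": workload_id,
                "sla_tier": sla_tier,
                "io_hash": hashlib.sha256(
                    json.dumps([args, kwargs, result], default=str).encode()
                ).hexdigest(),
            }
            # Stand-in for pal.declare_claim(claim) + pal.request_proof(...).
            print("claim emitted:", claim["io_hash"][:12])
            return result
        return inner
    return wrap

@must_be_proven("MATMUL_4096")
def heavy_compute(x: int) -> int:
    return x * x

heavy_compute(7)
```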
Settlement adapter. A component that connects to existing wallets and key-management setups, and that exposes “pay-for-proof” and “pay-for-compute” as simple intents: no manual swap logic, no bespoke bridges, just a predictable interface plus receipts, backed underneath by PRK and the atomic-payout kernel.
Verification module. Modules that can be compiled into smart contracts, browser bundles, and command-line tools, so that end-users and auditors can check receipts without standing up new infrastructure. verify(receipt) should be as embedded in the tooling as log() is today.
The work of the next few years is, in large part, to make these three pieces boring. When “import proofs” feels like “import TLS,” and when “settle privately” feels like “call the payments API,” the cypherpunk monetary stack stops being an argument and starts being the default way serious systems are built.
At that point, the rest of the thesis follows almost automatically. Open hardware defines machines whose behavior can be tied to math. Proof factories mint receipts the world can check. Compute consensus pilots show that blockchains can pay for useful work instead of waste. Privacy rails ensure that capital can move without being corralled. The stack interlocks: Create/Compute → Prove → Settle → Verify, instrumented by telemetry and grounded in verifiable machines. Out of that loop, the three primitives—Privacy, Proofs, Compute—start to behave like money.