§16. Layer 2 – Distribution & Execution: Software Supply & Runtime
Software distribution is where lofty protocol guarantees meet the boring reality of phones, laptops, and routers that have to run code. In a repression cycle, app stores are throttled, DNS is poisoned, TLS is MITM’d, and “safety” policies become chokepoints; this surface becomes one of the first battlefields.
It is far easier to lean on an app store, certificate authority, CDN, or ISP than to break a new primitive. A regime does not need to break ZK or PoUW if it can simply stop you from installing the clients that speak those protocols. The playbook is correspondingly unglamorous: push‑notification channels revoked, binaries silently replaced, TLS terminated at state CDNs, update servers geofenced or rate‑limited, and “emergency security directives” used to demand backdoored builds.
If privacy, proofs, and compute are monetary primitives, then the binaries that implement them must keep moving even when networks are hostile.
In the vocabulary of this thesis, distribution and updates are what keep the Create/Compute → Prove → Settle → Verify loop alive when the transport gets adversarial; they are how we preserve agency (privacy rails), keep receipts honest (proofs), and maintain capacity (compute) in the face of filtering and outages. “No dashboards, no trust” applies just as much to updates as to proofs: liveness and authenticity have to be measurable in public, not asserted.
16.1 Threat model & objectives
An adversary’s repertoire at this layer is mundane and brutal:
- App‑store removals and policy bans.
- DNS poisoning, SNI and IP blocking, and TLS interception via “trusted” national CDNs.
- BGP hijacks or route blackholing for update servers.
- Certificate revocations, or captive portals that substitute their own “secure updates.”
- Targeted developer coercion and “emergency directives” that demand jurisdiction‑specific backdoors.
The objective, then, is not to be clever; it is to remain boringly available under those conditions. We want a world where updating a prover, wallet, or router feels as uneventful as refreshing a browser, even when DNS lies, some CDNs are hostile, and the app store has delisted you. That is why distribution should be treated as a first‑class design surface with explicit objectives and SLOs, rather than an afterthought tacked on after the protocol work is done.
These are just the software‑distribution versions of the macro and censorship tools described earlier: the same forces that weaponize yield curves and surveillance also weaponize app stores and CDNs.
Our objectives are correspondingly simple:
- Reachability: users can discover and fetch releases despite network interference.
- Authenticity: users can verify what they fetched without trusting a platform.
- Safety: upgrades are atomic, reversible, and survive power loss or packet loss.
- Neutrality: access is open; no single jurisdiction, vendor, or relay can quietly gate updates.
- Telemetry without doxxing: reliability and capture risk are measurable in public, without building a surveillance layer.
Just as we defined SLOs for VerifyPrice, we should define SLOs for updates and publish them. For each client type, we want to know: p50/p95 metadata fetch latency, artifact fetch success rate, signature‑check failure rate, and rollback rate. If “verify, don’t trust” is our credo, then a distribution pipeline that cannot be measured is a liability.
16.2 Design rules (what must be true)
The design rules for distribution are the “verify, don’t trust” credo translated into the mechanics of shipping binaries.
Authenticity before reachability. We assume that any transport can be subverted. Updates are treated as signed artifacts plus receipts, not as “whatever came from this host.” Concretely:
- Releases ship with threshold signatures (e.g., 3‑of‑5) from independent maintainers.
- Inclusion in an append‑only transparency log is required; an update without a log proof is treated as suspect.
- A small, stable predicate, verify(update_receipt) → {accept, reject}, decides whether a candidate update is acceptable, where the receipt binds the release id, commit digest, build recipe, target platform, signatures, and transparency‑log proofs. Verification must be cheap enough to run on a laptop or inside a contract.
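A minimal sketch of that predicate, assuming a hypothetical UpdateReceipt layout; the field names, the signer roster, and the 3‑of‑5 threshold are illustrative, and verify_sig stands in for a real signature check such as Ed25519:

```python
from dataclasses import dataclass
import hashlib

@dataclass(frozen=True)
class UpdateReceipt:
    release_id: str
    commit_digest: str            # digest of the tagged source commit
    build_recipe_hash: str        # hash of the exact build instructions
    platform: str                 # e.g. "linux-x86_64"
    signatures: dict              # signer_id -> signature over the fields above
    log_leaf_hash: str            # hex leaf committed to the transparency log
    log_inclusion_proof: list     # [(sibling_hex, sibling_is_left), ...]

TRUSTED_SIGNERS = {"maint-a", "maint-b", "maint-c", "maint-d", "maint-e"}
THRESHOLD = 3  # the 3-of-5 policy from the text

def verify(receipt: UpdateReceipt, expected_log_root: str, verify_sig) -> str:
    """Return "accept" or "reject". verify_sig(signer_id, payload, sig) is the
    platform's real signature check, injected to keep the predicate small."""
    payload = "|".join([receipt.release_id, receipt.commit_digest,
                        receipt.build_recipe_hash, receipt.platform]).encode()
    valid = {s for s, sig in receipt.signatures.items()
             if s in TRUSTED_SIGNERS and verify_sig(s, payload, sig)}
    if len(valid) < THRESHOLD:
        return "reject"           # not enough independent maintainers
    # Recompute the Merkle root from the leaf along the inclusion path.
    node = bytes.fromhex(receipt.log_leaf_hash)
    for sibling_hex, sibling_is_left in receipt.log_inclusion_proof:
        sibling = bytes.fromhex(sibling_hex)
        node = hashlib.sha256(sibling + node if sibling_is_left
                              else node + sibling).digest()
    if node.hex() != expected_log_root:
        return "reject"           # no transparency-log proof: treat as suspect
    return "accept"
```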
Reproducible builds as default. We do not ask users to believe that a binary matches a public repository; we give them evidence. At least one independent builder reproduces the binaries from the tagged source, and the build process emits:
- A deterministic artifact hash.
- A Merkle or zk proof tying that hash to the source commit and build recipe.
This is the same pattern we use for computation: turn “trust me” into a PIDL‑style receipt that can travel and be checked anywhere.
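As a sketch of what turning “trust me” into evidence looks like mechanically: each independent builder hashes its output, and the release is blocked unless every digest matches bit for bit (the divergence rule codified as invariant I3 below). Helper names are illustrative:

```python
import hashlib

def artifact_digest(path: str) -> str:
    """Deterministic SHA-256 of a built artifact, streamed in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_reproduce(builder_outputs: list[str]) -> bool:
    """Every independent builder must produce bit-for-bit identical output;
    any divergence is a release-blocking event."""
    return len({artifact_digest(p) for p in builder_outputs}) == 1
```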
Threshold keys and real revocation. Root keys live offline and are sharded; release keys rotate regularly and their validity is bounded in time. Revocation is not a tweet; it is a signed artifact in the transparency log. Clients check revocation status whenever they fetch metadata.
Binary transparency anchored to a proof ledger. The update log itself must be auditable. It behaves like a certificate transparency log for releases, periodically anchored to a neutral proof ledger (e.g., a zk‑PoW receipt chain). That gives us long‑lived evidence of what was published, when, and with which keys; evidence that does not depend on any single company’s archive.
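A sketch of the log’s core primitive, using RFC 6962‑style domain separation between leaves and interior nodes; the pairing scheme here (duplicating the odd node out) is a simplification of what production logs do, and the root it computes is the value that would be anchored to the proof ledger:

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # Domain-separate leaves from interior nodes (RFC 6962-style prefixes).
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def log_root(entries: list[bytes]) -> bytes:
    """Merkle root over the append-only release log. Anchoring this value
    externally means the log operator cannot later rewrite what was published."""
    level = [leaf_hash(e) for e in entries]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])      # duplicate the odd node out
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```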
Atomic updates with safe rollback. Clients update into an inactive slot (A/B or A/B/C schemes), flip a pointer only after verification and basic health checks, and revert automatically on failure. A half‑delivered or malicious update becomes an inconvenience, not an extinction event.
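A minimal A/B sketch; install, verify_ok, and health_ok are injected stand‑ins for the real staging, signature, and health logic, and the atomic os.replace of the pointer file is the single commit point:

```python
import json
import os

def inactive_slot(state: dict) -> str:
    return "b" if state["active"] == "a" else "a"

def apply_update(state_path: str, install, verify_ok, health_ok) -> bool:
    """Stage into the inactive slot; flip the active pointer only after
    verification and health checks pass. os.replace is atomic on POSIX,
    so a crash at any point leaves either the old or the new state intact."""
    with open(state_path) as f:
        state = json.load(f)
    slot = inactive_slot(state)
    install(slot)                        # write the new build into the idle slot
    if not (verify_ok(slot) and health_ok(slot)):
        return False                     # nothing flipped; the old slot still boots
    state["active"] = slot
    tmp = state_path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())             # survive power loss mid-write
    os.replace(tmp, state_path)          # atomic pointer flip
    return True
```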
Graceful degradation. When online channels fail, offline packages (USB, SD, QR, radio) use the same receipts and keys. When telemetry is blocked, clients still verify locally with bounded CPU and no external lookups. The sophistication of the transport can degrade; the authenticity invariant must not.
Neutral payment for distribution. Mirrors and relays may be paid, but always via non‑custodial, privacy‑preserving routes, leaving behind receipts that auditors can check. That keeps incentives aligned without turning distributors into financial chokepoints.
The practical test of these rules is: what does an attacker have to do to push a malicious build to millions of users? If the answer is “serve a different ZIP from the right hostname,” the design has failed. If the answer is “coerce several signers in multiple countries, bypass reproducible builds, and have every client ignore a revocation artifact,” the bar is high enough that cryptography matters again.
16.2.1 Minimum viable supply-chain invariants
The design rules above describe best practices. This box specifies the minimum invariants that must hold for the thesis to remain credible. If any of these fail, the distribution layer becomes a single point of failure.
Supply-Chain Invariants (Must Hold)
I1. Authenticity: Every release requires threshold signatures (≥3-of-5) from independent maintainers in ≥2 jurisdictions, plus inclusion in a transparency log.
I2. Freshness: Clients enforce monotonic version counters. Rollback attacks (serving old, vulnerable versions) are detected and rejected; a minimal check is sketched after the thresholds table below.
I3. Reproducibility: At least 2 independent builders reproduce each release from source. Divergence between builders is a release-blocking event.
I4. Compromise response: If a signing key is compromised:
- Revocation artifact published to transparency log within 24 hours.
- Affected releases quarantined.
- Clients reject artifacts signed only by compromised key.
- Post-mortem published within 14 days.
I5. Multi-path availability: Releases are available via ≥3 independent distribution channels (e.g., HTTPS, Tor, IPFS, satellite). No single channel’s failure blocks updates for >24 hours.
Quantitative thresholds:
| Invariant | Metric | Threshold | Consequence if Breached |
|---|---|---|---|
| I1 | Releases with <3 signatures | 0% | Release invalid; clients reject |
| I2 | Accepted rollbacks | 0 (attempts logged and rejected) | Alert to governance; investigation |
| I3 | Builder divergence | 0 per release | Release blocked until resolved |
| I4 | Time to revocation | <24 hours | Post-mortem required; process audit |
| I5 | Channel availability | ≥2 channels up at all times | Incident response triggered |
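The freshness invariant (I2) reduces to a tiny monotonic check, sketched here; last_seen would be persisted in tamper‑evident client storage and only ever ratcheted upward:

```python
def accept_version(last_seen: int, offered: int) -> bool:
    """Invariant I2: never install anything at or below the highest
    version already seen, regardless of how valid its receipts look."""
    return offered > last_seen

# A client that has seen release 42 rejects a replayed release 41 even if
# it carries valid signatures and a valid log proof:
assert accept_version(42, 41) is False
assert accept_version(42, 43) is True
```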
Privacy constraints on distribution telemetry:
Distribution health requires telemetry, but telemetry must not become a surveillance layer.
- No per-device identifiers: Telemetry uses randomized, rotating identifiers that cannot be linked across sessions.
- Aggregation thresholds: Data is published only when ≥100 devices contribute to a metric (k-anonymity).
- Delayed publication: Regional metrics are delayed by 6–24 hours to prevent real-time deanonymization.
- Mixnet submission: Telemetry reports are submitted via mixnet or Tor where available.
- Opt-out available: Users can disable telemetry entirely; this is logged (in aggregate) but not penalized.
These constraints ensure that “no dashboards, no trust” does not become “dashboards enable tracking.”
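A sketch of the k‑anonymity gate, assuming reports are plain dicts carrying only a coarse region tag and a success bit, nothing device‑linkable; the 100‑device threshold comes from the constraint above:

```python
from collections import Counter

K_ANON = 100  # publish a metric only when >=100 devices contributed

def publishable_success_rates(reports: list[dict]) -> dict[str, float]:
    """Aggregate per-region update success rates, suppressing any region
    that fails the k-anonymity threshold. Reports carry no device ids."""
    successes: Counter = Counter()
    contributors: Counter = Counter()
    for r in reports:
        contributors[r["region"]] += 1
        if r["ok"]:
            successes[r["region"]] += 1
    return {region: successes[region] / n
            for region, n in contributors.items() if n >= K_ANON}
```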
16.3 A reference distribution architecture (multi‑home by design)
The architecture sketched here is deliberately multi‑homed. We assume that any single discovery channel or artifact host can be turned into a chokepoint, so no single channel is indispensable: no individual DNS zone, CDN, app store, or messaging system can be “the” distribution path.
Control plane (discovery and metadata). The control plane answers the question: what is the latest version, and what does it look like? It can include:
- Standard HTTPS endpoints behind diverse DNS providers (with DNSSEC and DoH/DoT fallbacks).
- Tor onion services that expose the same manifests.
- Signed release announcements gossiped over generic protocols (Matrix, Nostr, email with PGP/age‑signed manifests).
- An append‑only transparency log whose roots can be checked by light clients and occasionally anchored to a proof ledger.
HTTPS with ordinary DNS is the happy path, not the only path. Clients should know how to fall back to DoH/DoT, Tor onion services, gossip‑based announcements, and even email‑style transports when the “normal internet” becomes hostile, blocked, or tampered with.
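The fallback behavior reduces to a small loop, sketched here with hypothetical fetcher callables standing in for the HTTPS, DoH/DoT, onion, and gossip transports; note that verify() is the same regardless of which transport answered:

```python
def fetch_manifest(fetchers, verify):
    """Try each reachable control-plane channel in turn. Authenticity comes
    from verify(), never from which transport happened to answer."""
    errors = []
    for name, fetch in fetchers:
        try:
            manifest = fetch()
        except Exception as exc:          # blocked, poisoned, or timed out
            errors.append((name, repr(exc)))
            continue
        if verify(manifest):              # same signed-manifest check everywhere
            return manifest
        errors.append((name, "fetched but failed verification"))
    raise RuntimeError(f"no channel yielded a valid manifest: {errors}")

# Usage: channels ordered by preference, all interchangeable in principle.
# fetch_manifest([("https", https_fetch), ("doh", doh_fetch),
#                 ("onion", onion_fetch), ("gossip", gossip_fetch)], verify)
```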
Data plane (artifact delivery). The data plane answers: where do I get the bytes? It should be built around content addressing:
- Traditional CDNs and anycast mirrors in multiple jurisdictions.
- P2P swarms (IPFS‑like or BitTorrent‑class) that use the artifact hash from the manifest as the address of truth.
- Community micro‑mirrors: any organization can host a partial mirror by serving a signed manifest of what they provide.
- Offline channels:
- USB/SD bundles distributed via community hubs.
- QR‑encoded update chunks for environments where only paper can cross a boundary.
- Low‑bandwidth mesh or radio relays where lawful.
All of these are interchangeable from the client’s perspective. Whoever delivers the bytes fastest wins, but no one is trusted beyond the hash: the client verifies the final digest and only then installs into the inactive slot.
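A sketch of hash‑bounded delivery: chunk hashes and the final digest come from the signed manifest, so any mix of untrusted sources can serve the bytes. The source callables are illustrative, standing in for CDN, swarm, or micro‑mirror backends:

```python
import hashlib

def fetch_artifact(chunk_hashes, final_digest, sources):
    """Assemble an artifact from untrusted sources. A source is a callable
    returning the bytes of chunk i, or None if it cannot serve that chunk."""
    body = bytearray()
    for i, expected in enumerate(chunk_hashes):
        for source in sources:               # any peer may serve any chunk
            chunk = source(i)
            if chunk is not None and hashlib.sha256(chunk).hexdigest() == expected:
                body.extend(chunk)           # chunk verified on arrival
                break
        else:
            raise IOError(f"chunk {i} unavailable from all sources")
    if hashlib.sha256(bytes(body)).hexdigest() != final_digest:
        raise ValueError("final digest mismatch; refusing to install")
    return bytes(body)
```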
Client update engine. On the client side, this reduces to a small, auditable state machine:
- Fetch metadata from whatever control‑plane sources are reachable.
- Verify signatures and transparency proofs.
- Select artifact sources based on past reliability and current reachability.
- Stream and resume artifact downloads, verifying chunk hashes and final digest.
- Stage, test, and commit or roll back.
- Emit coarse, privacy‑preserving telemetry about success or failure.
Each successful update produces an Update Receipt, a PIDL‑compatible record that describes what was installed, when, using which manifests and keys. It can be stored locally, anchored on‑chain, or supplied to auditors without revealing user identity.
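A sketch of what such a record might look like; the field names are illustrative, and deterministic serialization is what lets the same bytes be hashed, stored locally, anchored on‑chain, or handed to an auditor:

```python
from dataclasses import asdict, dataclass
import json

@dataclass(frozen=True)
class UpdateRecord:
    release_id: str
    artifact_digest: str       # what was actually installed
    manifest_digest: str       # which signed manifest authorized it
    signer_key_ids: tuple      # which release keys were checked
    log_root: str              # transparency-log root observed at install time
    installed_at: int          # unix seconds, coarse on purpose

def emit_receipt(rec: UpdateRecord) -> str:
    """Deterministic serialization with no user identifiers, so the record
    can travel anywhere and always hash to the same value."""
    return json.dumps(asdict(rec), sort_keys=True, separators=(",", ":"))
```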
Content addressing is the natural companion to our receipt‑centric worldview: once the hash of an artifact is part of the signed metadata, the network can be an untrusted swarm, because clients check the final digest no matter who served the bytes. Offline channels (SD cards passed hand‑to‑hand, PDF or QR kits posted in community spaces, low‑bandwidth mesh) sound anachronistic until they are the only channels left.
The point is not to romanticize sneakernet; it is to ensure that, even in a partial blackout, people can still get from “I have this hash” to “I have this binary and it verifies.”
16.4 Key management & release process (capture‑resistant)
Key management is where many otherwise sophisticated systems quietly re‑centralize. A single HSM in a single jurisdiction, guarded by a small team, becomes a soft power lever: compromise the humans or the box, and you own the distribution pipeline.
The release process described here is meant to de‑romanticize that bottleneck. Threshold signatures distribute trust across people and hardware vendors; transparency logs and freeze/unfreeze artifacts make governance decisions legible; and geographic dispersion of signers makes it harder to coerce the whole system at once.
A capture‑resistant release process needs at least:
- A public release ceremony. Changes land on protected branches; CI produces deterministic binaries; independent builders confirm bit‑for‑bit equality. A quorum of maintainers signs the targets manifest and its inclusion in the transparency log, producing an update receipt.
- Emergency freeze and unfreeze. Any maintainer can propose a freeze, but activation requires a threshold of signatures (sketched below). While frozen, clients refuse new releases from the affected key set. Unfreeze requires an explicit, signed artifact explaining the incident and recording new keys.
- Compromise playbook. Detect, revoke, rotate, and re‑sign last‑known‑good releases. All of these steps produce receipts that anyone can later audit.
- Jurisdictional diversity. Key shards and signers are distributed across legal zones and vendor ecosystems, making it difficult for one government or one manufacturer to unilaterally subvert the process.
This is governance at the update layer, done in the same spirit as monetary governance in the main thesis: predictable, transparent, and backed by artifacts rather than press releases.
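The freeze rule referenced above can be sketched as a comparison of quorum‑signed artifacts in the log; the artifact shape, the "seq" field, and the threshold are illustrative:

```python
FREEZE_THRESHOLD = 3  # illustrative; mirrors the 3-of-5 release policy

def freeze_active(freeze_artifacts, unfreeze_artifacts, count_valid_sigs):
    """A key set is frozen when the latest quorum-signed freeze artifact in
    the transparency log is newer than the latest quorum-signed unfreeze.
    Artifacts are dicts carrying a log sequence number under "seq"."""
    def latest(artifacts):
        return max((a["seq"] for a in artifacts
                    if count_valid_sigs(a) >= FREEZE_THRESHOLD), default=-1)
    return latest(freeze_artifacts) > latest(unfreeze_artifacts)

# While this returns True, the client refuses any new release signed by
# the affected key set, exactly as the freeze rule requires.
```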
16.5 Update economics & neutrality
Distribution has an economics layer just as proofs and compute do. Bandwidth, storage, and operational effort are scarce. If we pretend they are free, we drift back toward “blessed” CDNs and corporate mirrors as the only practical infrastructure. Instead, we let mirrors be market actors: they meter what they serve, emit receipts, and get paid for honest work.
This also keeps the routing layer honest. If all traffic flows through one corporate CDN, we have simply relocated the chokepoint. But if mirrors compete behind a standard receipt format and neutral router logic, we can cap house share, monitor concentration, and automatically steer traffic toward a more diverse set of providers. Updates become another instance of “pay only for work anyone can verify cheaply”: in this case, the work is reliable byte delivery.
Concretely, distribution becomes a small, neutral market:
- Receipts for bytes: mirrors/relays export metered receipts (e.g., GB‑served of specific content‑ids) signed by their nodes; a broker aggregates them and pays out via privacy rails with SLA escrow and slashing for junk.
- Open admission: any party can become a mirror by publishing bandwidth and uptime commitments and staking a small bond (returned if no fraud).
- Anti‑capture telemetry: publish house‑share, top‑N mirror share, geographic/ASN diversity (same decentralization dashboards we require for prover markets). If a CDN dominates beyond thresholds, route weight decays automatically.
This makes distribution fit into the same pattern as PoUW and proof markets: pay for useful work that anyone can verify cheaply. Honest mirrors earn fees; lazy or malicious ones lose deposits or lose routing weight. The practical result is not a trading pit, but a quietly healthy ecosystem of diverse distribution providers.
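The automatic route‑weight decay can be sketched as single‑round clipping and renormalization; a production router would likely iterate until every share respects the cap, and the 25% cap here is illustrative, not a recommendation:

```python
def reweight(shares: dict[str, float], cap: float = 0.25) -> dict[str, float]:
    """One round of anti-capture decay: clip any mirror's traffic share at
    the cap, then renormalize, shifting relative weight toward the long tail."""
    clipped = {m: min(s, cap) for m, s in shares.items()}
    total = sum(clipped.values())
    return {m: w / total for m, w in clipped.items()}

# A CDN at 60% of traffic loses its dominance; its clipped weight now ties
# the next-largest mirror instead of dwarfing it.
print(reweight({"cdn-a": 0.60, "mirror-b": 0.25, "mirror-c": 0.15}))
# {'cdn-a': 0.3846..., 'mirror-b': 0.3846..., 'mirror-c': 0.2307...}
```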
16.6 Telemetry: “no dashboards, no trust” (but without doxxing)
Just as VerifyPrice turns vague assurances about “fast proofs” into concrete distributions of verifier time and cost, update telemetry turns “we’re fine” into a falsifiable claim. The goal is not to track individual users (that would undermine everything else in this thesis) but to track the health of the system: publish enough anonymous aggregate data that anyone can see when distribution is being squeezed.
Clients can emit coarse, randomized counters about:
- Whether the last update attempt succeeded or failed.
- Which path class was used (CDN, P2P, offline).
- A bucketed latency to success or failure.
- Whether rollback was required.
These reports can be gossiped through mixnets, aggregated by independent observers, and displayed as public dashboards. The resulting metrics include:
- p50/p95 metadata and artifact fetch times.
- Success rates by path class and region.
- Rollback rates and signature‑failure rates.
- Concentration metrics for mirrors (top‑N share, house share).
- Time to recovery from incidents (freeze, compromise, or censorship event).
From an operator’s perspective, an update pipeline that cannot be measured is a liability. You cannot see whether a new censorship regime is silently throttling downloads in one country, or whether a particular path (say, Tor plus a P2P swarm) is degrading. From an allocator’s perspective, distribution health is part of the same risk profile as swap success and proof reliability. If the clients that talk to your network cannot reasonably stay up to date, then your receipts and SLAs are running on borrowed time.
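One way to make individual reports non‑incriminating while keeping aggregates estimable is classic randomized response, sketched here; p_truth is an illustrative privacy/accuracy knob:

```python
import random

def noisy_report(succeeded: bool, p_truth: float = 0.75) -> bool:
    """Report the true bit with probability p_truth, otherwise a fair coin,
    so no single report says anything definitive about one device."""
    return succeeded if random.random() < p_truth else random.random() < 0.5

def estimate_success_rate(reports: list[bool], p_truth: float = 0.75) -> float:
    """Invert the noise: E[report] = p*rate + (1-p)*0.5, solved for rate."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth
```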
16.7 Fallback playbooks (when, not if, things break)
The fallback playbooks are not edge cases; they are the moves that will eventually be required in production. The history of communication networks under authoritarian pressure is already full of examples: DNS poisoning, BGP hijacks, app‑store purges, and blunt‑force internet shutdowns. The right question is not whether these will be deployed against systems that carry real value, but when and how often.
Having explicit, rehearsed playbooks is how we keep those moments from turning into existential crises. When DNS breaks, clients know to lean more heavily on onions and content‑addressed swarms. When app stores purge binaries, there are signed sideload bundles and offline kits waiting. When keys are exposed, there is a script for freezing, revoking, and recovering that has been run before, in daylight, with the community watching. In all cases the pattern is the same: we trade some convenience for survivability, and we document those trades in advance.
We should therefore write down and practice a small set of moves:
- Soft network blocks (DNS/SNI/IP): bias clients toward alternative control‑plane channels (DoH/DoT, onions), and toward content‑addressed P2P for artifacts.
- Hard regional blocks (nation‑scale outages): rely on offline kits (USB/SD bundles, QR sets, radio transfers) seeded ahead of time through civic institutions, diaspora networks, and NGOs.
- App‑store bans: publish signed sideload bundles, with older SDK targets for legacy devices, and clear, localized instructions for installing them.
- Key compromise: hit the freeze button; revoke; rotate; and re‑sign last‑known‑good releases, with all steps recorded in the transparency log and receipts.
The goal is not to make these scenarios pleasant. It is to make them survivable and drillable. If the update pipeline behaves predictably under stress (falling back to slower but still verifiable channels), then the monetary stack that sits above it can keep functioning.
16.8 Lawful privacy & scope
The same architectural moves that make distribution censorship‑resistant also make it more legible to good‑faith auditors. Transparent logs, reproducible builds, and signed receipts give regulators and enterprise risk teams something objective to look at:
- Which releases were shipped, and when.
- Which vulnerabilities were patched, and how quickly.
- Whether there were jurisdiction‑specific forks or backdoors.
Crucially, they can look at that evidence without sitting in the middle of every update flow. There is no need for a regulator to become an on‑path decryption point, or for an app store to play silent gatekeeper, when receipts and logs already tell the story.
This is the blueprint for “lawful privacy” in distribution. Instead of backdoored auto‑updaters and vendor‑only telemetry, we offer verifiable guarantees: no silent downgrades, no undisclosed forks per jurisdiction, no ability for a single platform to quietly push a special build to one region without leaving a cryptographic trail. Auditors get evidence; users keep agency. The system does not ask to be exempt from scrutiny; it insists that scrutiny be applied at the level of receipts and logs rather than taps and implants.
The same receipts that protect users also protect auditors and courts: you can prove what you shipped and when, without re‑centralizing trust in a single platform or jurisdiction.
Everything here is built to preserve lawful user agency under repression: authenticity, consented disclosure (viewing keys), and non‑custodial settlement. The design is architectural, not agitational: it removes chokepoints while leaving evidence.
16.9 What to publish (minimum viable public truth)
Finally, publishing distribution metrics is about aligning culture with claims. A project that says “verify, don’t trust” but treats its update pipeline as proprietary plumbing is telling you something about where the real power lies. By contrast, a project that routinely publishes update health, key fingerprints, log roots, and incident reports is inviting others to check its work.
For operators, these dashboards are operational tools: they show where to add mirrors, which regions are struggling, and whether a new censorship technique has landed. For allocators, they are underwriting inputs: they quantify the probability that users will be able to keep validating, proving, and settling over a five‑ or ten‑year horizon. For users, they are quiet reassurances that the stack they depend on has not silently drifted back toward “trust the vendor.” In that sense, distribution metrics sit alongside VerifyPrice and swap success as part of the same story: money backed not by promises, but by systems that can be inspected.
To make distribution part of the triad’s credibility rather than its blind spot, networks and projects should publish, continuously:
- Update health: p50/p95 metadata and artifact fetch times; per‑path success rates; rollback and signature‑failure rates.
- Transparency evidence: current log roots; inclusion proofs for each release; the last successful independent reproduction per build; anchor transaction identifiers on the proof ledger.
- Decentralization and capture metrics: top‑N mirror share; geographic and ASN diversity; entry latency for new mirrors; house share of any distribution broker.
- Key material and incidents: current root fingerprints; release‑key roster and expiries; revocation lists; incident reports for freezes, compromises, and key rotations.
These are the “VerifyPrice‑for‑updates” numbers. They do not guarantee perfection, but they make degradation and capture visible. A stack that can show, in public, how it handles distribution under stress is a stack that has earned the right to claim resilience. It is not asking for trust; it is presenting receipts.
Bottom line. Distribution is not a logistics afterthought; it is part of the security and monetary story. If binaries cannot move, privacy cannot be exercised, proofs cannot be trusted, and compute cannot be paid. Keep many roads open, but accept artifacts only when the receipts clear. In a repression cycle, your lifeline is not a particular host or app store; it is a verifiable path from signed source to installed binary that anyone can traverse and anyone can check.
Build updates the way we build the rest of the stack: permissionless to fetch, cheap to verify, auditable in public, and hard to gate.
In combination, these practices turn software distribution from a fragile, centralized choke point into another domain where the thesis applies: pay only for work anyone can verify cheaply; keep verification public; and expose enough telemetry that repression and capture have nowhere quiet to land.