§4. Threat Model: Adversaries, Assets, and Assumptions
Why a threat model? This thesis argues that Privacy, Proofs, and Compute can function as monetary primitives when verification is cheap and public. A threat model makes that claim falsifiable. It defines what we must protect, who is trying to break it, where the trust boundaries lie, and which metrics decide if the system remains credible under pressure. It sits between first principles and implementation so that every later design choice can be traced to an explicit adversary and invariant.
Every architecture is, implicitly, a bet about who will attack it and how. Up to this point we have argued from first principles: what a store of value must survive, and why Privacy, Proofs, and Compute can be engineered to meet those conditions. The next step is to be explicit about the pressure this stack will live under.
The backdrop is already in view. Macroeconomic arithmetic makes financial repression attractive: debt stocks in the hundreds of percent of GDP, negative real rates by policy, capital controls and captive balance sheets as standard tools rather than emergency measures.
At the same time, the web’s trust default has flipped from “trusted unless flagged” to “untrusted unless proven” as deepfakes and coordinated moderation make memory itself a soft target. The combination is simple and brutal: states are incentivized to squeeze balance sheets and communications; platforms are incentivized to mediate reality; hardware and networks are increasingly treated as levers of policy, not neutral infrastructure. Against that field, a “next‑gen store of value” is not being asked to survive price volatility; it is being asked to survive a world that optimizes for control.
4.1 Scope and assets
The scope is the full Create/Compute → Prove → Settle → Verify loop and its hardware base layer. At minimum, we need to protect four things:
- Integrity of receipts & state. Proofs, provenance, settlement artifacts, and consensus state must not be silently corrupted.
- Confidentiality of flows. Private settlement with optional disclosure, not privacy that evaporates at the first audit request.
- Availability & neutrality. Open admission for provers and users, anti‑capture, and non‑seizable settlement paths.
- Verifiability economics. Verification must remain cheap in absolute and relative terms; otherwise “anyone can verify” collapses into “someone we have to trust.”
These map directly to the SoV properties enumerated in §3. The assets at risk are not just balances on ledgers. They are the whole loop and the matter it runs on.
- At the bottom lie machines: chips, RNGs, secure elements, TEEs, capture devices, modems, routers, and the power infrastructure that feeds them.
- Up one level sit the proofs and receipts that describe what those machines did: provenance attestations, computation proofs, settlement artifacts, audit trails.
- Above that are the flows of value and obligation that ride on those receipts: private payrolls, cross‑chain settlements, inference contracts, collateral arrangements.
- Threaded through everything are the communications substrate that carries claims and proofs, the software distribution channels that keep clients coherent, and the identity and key material that bind actions to actors without doxxing them.
All of these must remain sufficiently intact that savings, truth, and compute can continue to clear when old guarantees fail.
4.2 Adversary classes (capabilities & goals)
We care about several classes of adversaries:
- State‑level repressors. Impose negative real yields, capital controls, and hardware mandates; gate on‑/off‑ramps; require closed attestation; seek to turn banks, clouds, app stores, and IXPs into enforcement arms.
- Platform cartels. Strip or obscure provenance; collude on labels; prefer vendor‑mediated “trust” and walled‑garden verification.
- Economic attackers. Miner/prover cartels, router capture, MEV/censorship, liquidity games on swap corridors, front‑running and soft‑forking that tilt rules in their favor.
- Hardware/supply‑chain adversaries. Trojans, biased RNGs, covert debug paths that turn “proof” into theater; opaque TEEs whose attestation keys terminate in unaccountable HSM farms.
- Cryptanalytic / PQ attackers. Proof forging or signature breaks; long‑horizon post‑quantum risk.
- UX / vaporware risk. Spec‑drift and unverifiable performance claims that corrode credibility, even if the cryptography is sound.
Of these, the primary adversary in this frame is not a cartoon hacker but the rational sovereign under stress. A state that must service promises larger than its productive base will reach, as history shows, for the levers it actually controls: interest‑rate caps, yield‑curve control, capital controls, regulated custody, and mandatory “secure” hardware and identity schemes. It will be tempted to turn banks, clouds, app stores, hardware vendors, and IXPs into enforcement arms: require real‑name registration at the edge; throttle or shut down networks in the name of stability; mandate TEEs whose attestation keys terminate in state‑controlled HSMs; define “safety” as surveillance.
In parallel, platforms facing regulatory and reputational risk will centralize moderation and provenance: which media is shown, which credentials count, which proofs are recognized. The danger is not that any one actor becomes an obvious villain, but that their combined incentives re‑create a soft but totalizing chokepoint architecture.
Beneath that sovereign‑platform axis are economic adversaries: miners, provers, routers, and liquidity providers who prefer rent to work. They will collude if they can, capture matching engines and orderflow, quietly prioritize their own routes, soft‑fork the rules in their favor, or simply withdraw service when it suits them.
If verification becomes expensive or gated, a priesthood of “trusted verifiers” will emerge, and with it all the familiar pathologies of rent extraction and arbitrary censorship.
And below them, like bedrock or landmines, are hardware and supply‑chain adversaries: fabs and vendors (sometimes complicit, sometimes merely compromised) who can bias randomness, slip trojans into IP blocks, leave “debug modes” wired to secrets, or ship opaque enclaves that amount to remote‑controlled kill switches. If the machine can lie, the proof becomes theater.
4.3 Threats by loop stage (and the intended mitigation)
We can slice the threat surface by stage in the Create/Compute → Prove → Settle → Verify loop:
- Create/Compute.
  - Threats: poisoned models, mislabeled workloads, biased hardware profiles, hidden accelerators.
  - Mitigation: canonical workload registries, hardware profiles bound to receipts, open benchmarking, attested hardware provenance.
- Prove.
  - Threats: junk proofs, prover cartels, selective service, “house‑only” pricing.
  - Mitigation: multi‑ZK adapters, neutral routers with fairness tests, SLA escrows & slashing; publishing entry‑latency, top‑N share, and geography/ASN distributions (see the metrics sketch after this list).
- Settle.
  - Threats: corridor censorship, refund failure, deanonymization, bridge compromise.
  - Mitigation: adaptor‑signature swap kits, lawful‑privacy corridors (viewing keys + auditable receipts), bridge‑safety templates, mandatory refund safety, public VerifySettle metrics.
- Verify.
  - Threats: verification‑cost creep; opaque clients; “trust our node” monocultures.
  - Mitigation: laptop‑grade verifiers, reference harnesses, verify predicates embedded in SDKs, public p50/p95 telemetry via the VerifyPrice observatory.
- Telemetry & Governance.
  - Threats: silent centralization, KPI gaming, governance capture.
  - Mitigation: open dashboards for VerifyPrice, VerifyReach, VerifySettle; decentralization metrics (house share, geo/ASN spread, time‑to‑first‑proof); incident reports; hard rules about what must be public before changes ship.
- Layer 0.
  - Threats: “trust the vendor” cliff, mandatory closed TEEs, unmeasured side‑channels.
  - Mitigation: open RTL or microarchitectures where possible, lot sampling and imaging where not, SNARK‑wrapped attestations, and explicit side‑channel budgets.
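To make the prove‑stage and telemetry mitigations concrete: each of these numbers is computable by anyone holding the public receipts. Below is a minimal sketch of how a dashboard might derive top‑N prover share, ASN spread, and time‑to‑first‑proof from receipt records; the record fields and sample data are illustrative, not a normative schema.

```python
from collections import Counter
from statistics import median

# Hypothetical receipt records: (prover_id, asn, accepted_at_s, requested_at_s).
# Field names, sample values, and units are illustrative, not normative.
receipts = [
    ("prover-a", "AS13335", 12.0, 10.0),
    ("prover-a", "AS13335", 31.5, 30.0),
    ("prover-b", "AS16509", 55.0, 50.0),
    ("prover-c", "AS24940", 71.0, 70.0),
]

def top_n_share(receipts, n=3):
    """Fraction of accepted proofs produced by the n largest provers."""
    counts = Counter(r[0] for r in receipts)
    return sum(c for _, c in counts.most_common(n)) / sum(counts.values())

def asn_spread(receipts):
    """Number of distinct ASNs represented among provers."""
    return len({r[1] for r in receipts})

def median_time_to_first_proof(receipts):
    """Median delay between a request and its first accepted proof."""
    return median(accepted - requested for _, _, accepted, requested in receipts)

print(f"top-3 share: {top_n_share(receipts):.2f}")
print(f"ASN spread:  {asn_spread(receipts)}")
print(f"median TTFP: {median_time_to_first_proof(receipts):.1f}s")
```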
Another view is by plane rather than loop stage:
- Communications plane. DPI, SNI filtering, BGP manipulation, and blunt‑force shutdowns all aim to make the network unreachable when it matters most.
- Hardware plane. Closed designs and supply‑chain opacity threaten to turn cryptography into ritual.
- Proof & accountability plane. Prover cartels, biased routers, unverifiable performance claims can turn receipts into marketing.
- Settlement plane. Bridges, custodians, and on‑/off‑ramps can be throttled or deputized until “bearer” assets behave like permissioned IOUs.
- Work plane. Useful‑work mining and verified inference can silently degrade into “whoever owns the strangest accelerator and tightest regulatory capture wins.”
Each plane can be attacked directly, and each can be used as leverage on the others.
Taken together, these attackers are all trying to break the same design North Star: pay the machine only for work anyone can verify cheaply, and give humans privacy by default with auditability by consent. Everything else in this thesis is just scaffolding around that rule. Wherever the loop can be subverted (by making work useless, verification expensive, or privacy and settlement gateable), you get repression rather than neutrality.
4.4 Mapping the authoritarian playbook → countermeasures
We can sketch several canonical “playbook moves” and the corresponding counter‑moves. The table below maps real‑world censorship and surveillance tactics to the stack’s countermeasures:
| Playbook Move | What It Breaks | Counter |
|---|---|---|
| DNS/IP/SNI/QUIC filtering & active probing (GFW uses DNS poisoning, SNI blocking, probing) | Reaching verifiers/bridges; provenance fetch | Default to Tor Snowflake/obfs4, refraction networking where available; pin ECH; fall back to domain‑front‑free paths. |
| Domain fronting curtailed by major clouds | “Collateral freedom” paths disappear | Don’t rely on fronting; use Snowflake’s WebRTC and ISP‑partner decoy routing; keep multiple CDNs with content‑addressed updates. |
| App‑store pressure (HKmap, new takedowns) | Mobile distribution | Parallel update rails (direct, IPFS, USB/QR) + detached sigs; keep APK/IPA sideload guides ready. |
| Real‑name + SIM registration; unified network IDs | Pseudonymous ops | VC/anon‑cred kit; receipt‑based reputation; no SIM/face gates for access. |
| Government‑ordered shutdowns (record highs) | All of the above | Sat/mesh/sneakernet paths, store‑and‑forward receipts; pre‑provisioned peer lists; publish “blackout drills” results. |
| Backbone/seabed sabotage | Regional isolation | Multi‑landing routing, regional proof markets, diaspora relays; monitor cable incidents. |
| Bitcoin network metadata capture | Wallet/settlement mapping | BIP‑324 everywhere; BTC↔XMR/ZEC adaptor‑sig corridors with refund‑safety; publish corridor telemetry. |
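Several counters in the table (content‑addressed updates, detached signatures, parallel update rails) reduce to one mechanical check at install time: the bytes must match their content address and carry a valid publisher signature, no matter which rail delivered them. A minimal sketch using the Python cryptography package; the artifact, digest handling, and throwaway key are illustrative assumptions, since in practice the publisher key would ship pinned in the client.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verify_update(artifact: bytes, expected_digest: str, pubkey, signature: bytes) -> bool:
    """Accept an update only if its content address matches AND the
    publisher's detached signature verifies; the delivery rail (HTTPS,
    IPFS, USB/QR) never needs to be trusted."""
    if hashlib.sha256(artifact).hexdigest() != expected_digest:
        return False  # wrong bytes, regardless of who signed them
    try:
        pubkey.verify(signature, artifact)
        return True
    except InvalidSignature:
        return False

# Demo with a throwaway key (illustrative only).
signer = Ed25519PrivateKey.generate()
artifact = b"client-v1.0.1 binary bytes"
digest = hashlib.sha256(artifact).hexdigest()  # the content address
sig = signer.sign(artifact)                    # the detached signature

assert verify_update(artifact, digest, signer.public_key(), sig)
assert not verify_update(b"tampered bytes", digest, signer.public_key(), sig)
```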
Beyond these network‑level moves, the financial and institutional playbook includes yield‑curve control, capital controls, mandatory “secure” hardware and identity schemes, platform‑mediated reality attacks, and proof priesthoods. For each of these macro attacks, the stack provides specific countermeasures that are detailed throughout the document and formalized as testable stress harnesses in §23.4.
The high-level pattern:
- Yield‑curve control + captive balance sheets → triad assets with no fixed coupons, Work Credits priced by market demand for Privacy/Proofs/Compute rather than by decree.
- Capital controls + KYC choke points → non‑custodial privacy corridors (BTC↔ZEC/XMR) that remain viable without centralized exchanges; VerifySettle metrics that make corridor health public (see the corridor‑health sketch after this list).
- Mandatory “secure” hardware + identity schemes → Layer‑0 verifiable machines with open or sampled designs; hardware‑profile receipts; identity schemes at Layer 3 that separate accountability from doxxing.
- Platform‑mediated reality + provenance stripping → PIDL‑encoded receipts for media and computation; PaL to compile provenance claims into portable proofs that survive platform stripping.
- Proof priesthoods + closed verifiers → laptop‑grade reference verifiers; multi‑backend proof systems; VerifyPrice observatories that publish verifier costs and failure rates.
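As a rough illustration of the corridor‑health telemetry referenced above, here is a minimal VerifySettle‑style check over swap records; the field names, corridor labels, and thresholds are assumptions for illustration, not protocol constants.

```python
# Hypothetical per-corridor swap telemetry (counts over some window).
corridors = {
    "BTC<->XMR": {"attempted": 400, "completed": 388, "refunds_due": 12, "refunds_honored": 12},
    "BTC<->ZEC": {"attempted": 150, "completed": 120, "refunds_due": 30, "refunds_honored": 21},
}

SUCCESS_FLOOR = 0.95  # illustrative: flag corridors completing <95% of swaps
REFUND_FLOOR = 0.99   # illustrative: refund-safety should be near-total

for name, c in corridors.items():
    success = c["completed"] / c["attempted"]
    # Refund-safety: of the swaps that failed, how many refunded cleanly?
    refund = c["refunds_honored"] / c["refunds_due"] if c["refunds_due"] else 1.0
    status = "OK" if (success >= SUCCESS_FLOOR and refund >= REFUND_FLOOR) else "FLAGGED"
    print(f"{name}: success={success:.3f} refund_safety={refund:.3f} {status}")
```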
The thesis responds by treating neutrality, privacy, and verifiability as service‑level objectives, not vibes.
Layer 0 is framed as verifiable machines because it is precisely here that a stressed sovereign will try to install repression tooling “below” the protocol, through mandatory TEEs, certified HSMs, and national‑security carve‑outs. The triad is framed as monetary base because repression will attack duration instruments and custodial rails first. VerifyPrice is introduced not as an aesthetic metric but as the economic hinge that keeps verification cheap enough that anyone, not just a priesthood, can check the receipts.
The Create/Compute → Prove → Settle → Verify loop is unpacked because each stage is a natural chokepoint that must be deliberately de‑choked:
- Open workloads and hardware profiles at Create.
- Multi‑ZK, open‑router proof markets at Prove.
- Non‑custodial privacy rails with refund‑safe swaps at Settle.
- Laptop‑grade verifiers and public dashboards at Verify.
Telemetry is elevated to its own layer because without public numbers (on verification cost, decentralization, corridor health, and swap success), claims of neutrality and censorship‑resistance are indistinguishable from marketing. “No dashboards, no trust” is not a slogan; it is the governance rule that keeps the system from quietly sliding back into gatekeeping.
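The p50/p95 verification‑cost numbers such a dashboard publishes amount to a simple aggregation over independently reported samples. In this sketch the timing samples and the “laptop‑grade” budget are illustrative:

```python
from statistics import quantiles

# Hypothetical verification-time samples (milliseconds) reported by
# independent laptop-grade verifiers for one proof system.
samples_ms = [41, 38, 45, 39, 52, 40, 44, 61, 43, 39,
              42, 47, 55, 40, 38, 90, 41, 46, 44, 39]

# quantiles(n=100) yields the 1st..99th percentiles; indices 49 and 94
# are p50 and p95 respectively.
pct = quantiles(samples_ms, n=100)
p50, p95 = pct[49], pct[94]

BUDGET_MS = 100  # illustrative ceiling for "laptop-grade" verification
verdict = "within budget" if p95 <= BUDGET_MS else "REGRESSION"
print(f"VerifyPrice p50={p50:.1f}ms p95={p95:.1f}ms ({verdict})")
```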
At the same time, the threat model forces us to acknowledge assumptions. We assume:
- At least one family of open, sampled hardware profiles exists per canonical workload.
- Commodity cryptography remains unbroken or can be upgraded in time.
- Somewhere in the network there remain paths (over fiber, radio, satellite, or sneakernet) through which claims and receipts can flow.
- There is enough dispersed energy and hardware for independent verifiers to run.
We are clear about non‑goals. The stack does not certify metaphysical “truth” of media, only provenance and process. It does not promise perfect openness in fabs or chips on day one, only measurable progress and falsifiable claims. It does not abolish volatility or political risk; it channels them through verifiable economics rather than opaque decrees.
In that sense this threat model is simply the negative image of the North Star. “Pay the machine only for work anyone can verify cheaply” is not a poetic line; it is the design rule that each adversary will try to break in their own way. The rest of the document can be read as the counter‑move: verifiable machines to keep the silicon honest; Privacy, Proofs, and Compute wired into a loop that cannot easily be captured; VerifyPrice and settlement telemetry to keep verification and neutrality public; communications, identity, energy, and distribution layers treated as first‑class infrastructure rather than afterthoughts. Only when all of these hold together (under repression, not just in peacetime) do Privacy, Proofs, and Compute deserve to be called monetary primitives rather than a beautifully typeset wish.
4.5 Explicit tensions and boundary conditions
Two tensions within this thesis deserve explicit acknowledgment. Ignoring them would make the framework less credible; addressing them honestly strengthens it.
Tension 1: Privacy by default vs. coercion at the social layer
The concern: Lawful privacy proposes “optional disclosure via viewing keys.” But if disclosure becomes mandatory at chokepoints (exchanges, employers, landlords, visa applications), “optional” becomes fiction. The technical guarantee of privacy could be circumvented by social or legal coercion.
What the system can and cannot guarantee:
| Layer | Guarantee | Limitation |
|---|---|---|
| Protocol | No backdoors, no master keys, no escrow. Disclosure requires the holder’s key. | Cannot prevent a holder from being coerced to share their key. |
| Architecture | Non-custodial routes exist. Users can transact without intermediaries who could be compelled. | If all practical routes require KYC (e.g., fiat on-ramps), the non-custodial option may be theoretical. |
| Corridor design | Multiple asset paths (BTC↔XMR, BTC↔ZEC) with redundant liquidity in multiple jurisdictions. | If all corridors are choked simultaneously, exit paths narrow. |
| Telemetry | VerifySettle makes corridor health public; users can see which routes are viable. | Does not prevent coercion; only makes it visible. |
How the stack reduces coercion surface:
- Data minimization: By default, nothing is revealed. Coercion must extract keys, not merely subpoena records that already exist.
- Decentralized verification: No single auditor or platform must see all flows. Selective disclosure can be scoped narrowly (a minimal sketch follows this list).
- Multi-jurisdiction design: Corridors, mirrors, and provers are distributed so no single legal regime can compel universal disclosure.
- Exit optionality: Even under local coercion, users retain technical ability to move assets to less-coerced jurisdictions—if corridors remain open.
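To make “scoped narrowly” concrete, here is a deliberately simplified sketch of selective disclosure via per‑field hash commitments: the holder publishes commitments, keeps plaintext local, and can open exactly one field to one auditor. Production viewing‑key schemes are substantially more involved; the field names, salting, and opening format below are illustrative stand‑ins, not the corridor construction itself.

```python
import hashlib, hmac, os

def commit(value: bytes, salt: bytes) -> str:
    """Binding, hiding commitment to one field (salted SHA-256)."""
    return hashlib.sha256(salt + value).hexdigest()

# The holder publishes only commitments; plaintext and salts stay local.
fields = {"payee": b"acme-payroll", "amount": b"1500", "memo": b"rent"}
salts = {k: os.urandom(16) for k in fields}
public_receipt = {k: commit(v, salts[k]) for k, v in fields.items()}

# Scoped disclosure: reveal ONE field (value + salt) to one auditor
# without opening the rest of the receipt.
opening = ("amount", fields["amount"], salts["amount"])

def check_opening(receipt, name, value, salt) -> bool:
    """Auditor-side check that the revealed field matches its commitment."""
    return hmac.compare_digest(receipt[name], commit(value, salt))

assert check_opening(public_receipt, *opening)
```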
Honest acknowledgment: Lawful privacy is a technical property, not a social guarantee. It cannot prevent a sufficiently powerful adversary from coercing individuals. What it does is (a) raise the cost of mass surveillance (each disclosure requires individual coercion), (b) preserve optionality for those in less-coerced environments, and (c) make the coercion visible through telemetry rather than hidden in platform logs.
Tension 2: Verified compute vs. hyperscaler dominance
The concern: Compute is dominated by a handful of hyperscalers (AWS, Azure, GCP) and chip vendors (NVIDIA, AMD). If verified compute becomes valuable, won’t these incumbents simply dominate issuance and markets, making “decentralized” PoUW a fiction?
What the system offers vs. hyperscalers:
| Property | Hyperscaler Compute | Triad Verified Compute |
|---|---|---|
| Cost per raw FLOP | Lower (economies of scale) | Higher (proof overhead, smaller operators) |
| Verifiability | Trust the vendor | Anyone can verify receipts |
| Permissionlessness | Vendor can ban workloads, customers, regions | Open admission; no TOS-based exclusion |
| Censorship surface | Single legal entity; can be compelled | Distributed operators; no single chokepoint |
| Receipts | Vendor-issued invoices | PIDL receipts verifiable by anyone |
Why decentralized verified compute wins in specific scenarios:
- Repression scenarios: When a hyperscaler is compelled to ban certain workloads or customers, decentralized alternatives remain available. The value proposition is not “cheaper” but “still available.”
- Proof-heavy applications: For workloads where verifiability is the product (compliance proofs, provenance, audit trails), the receipt is the value, not raw FLOP cost. Hyperscalers don’t currently sell receipts; they sell capacity.
- Certain procurement regimes: Governments, NGOs, and enterprises with sovereignty constraints may prefer compute that doesn’t route through foreign hyperscalers or vendor-controlled enclaves.
- Composability with privacy: Hyperscaler inference is logged. Verified inference over privacy rails is not. For sensitive workloads (medical, financial, personal), the privacy premium justifies higher cost.
Honest acknowledgment: Decentralized verified compute will not beat hyperscalers on raw cost. It competes on verifiability, permissionlessness, and censorship-resistance. In a benign environment, hyperscalers win on price. In a repressive or high-stakes environment, verified compute wins on trust properties. The thesis bets that demand for these trust properties is structural and growing.
Telemetry response: If hyperscaler concentration in verified compute markets exceeds thresholds (e.g., >50% of VerifyPrice-tracked capacity from 3 vendors), this is flagged in dashboards. The decentralization metrics in §22–23 make this drift visible, not hidden.
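The concentration check itself is trivial to compute from public telemetry, which is the point. A sketch using the illustrative top‑3/50% threshold above; operator names and capacity units are hypothetical:

```python
from collections import Counter

# Hypothetical VerifyPrice-tracked capacity by operator (arbitrary units).
capacity = Counter({"hyperscaler-a": 400, "hyperscaler-b": 300,
                    "hyperscaler-c": 150, "indie-1": 90, "indie-2": 60})

TOP_N, SHARE_CEILING = 3, 0.50  # the example threshold from this section

top_share = sum(c for _, c in capacity.most_common(TOP_N)) / sum(capacity.values())
if top_share > SHARE_CEILING:
    print(f"FLAG: top-{TOP_N} operators hold {top_share:.0%} of tracked capacity")
```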
These tensions are not defeaters of the thesis; they are boundary conditions. The stack does not promise to solve all social and political problems. It promises:
- Technical guarantees that are real within their scope.
- Telemetry that makes boundary violations visible.
- Economic and architectural design that expands the scope of those guarantees over time.
If these boundary conditions tighten—if coercion becomes universal or hyperscalers capture all verified compute—the thesis weakens. But the design goal is to push against those boundaries, not to pretend they don’t exist.