§17. Layer 3 – Identity & Claims: Humans & Machines Without Doxxing
Jason St George. "§17. Layer 3 – Identity & Claims: Humans & Machines Without Doxxing" in Next‑Gen Store of Value: Privacy, Proofs, Compute. Version v1.0. /v/1.0/read/part-iii/17-layer-3/
Identity in a repression‑prone, AI‑saturated internet cannot be “a file with your name on it.” It must be capabilities you can prove on demand (age, jurisdiction, uniqueness, solvency, right‑to‑work, model ownership) without handing over the dossier. Reputation, in turn, should be the trail of receipts from prior correct behavior, not a database of personally identifying facts. Privacy is the default; proofs are the interface; compute (including open, attested machines) is the substrate that mints and verifies the receipts.
In earlier eras, “who you are” was anchored in a handful of slow‑moving artifacts: a passport, a bank relationship, a job. In today’s stack, most of the economically relevant identity surface is digital, high‑frequency, and platform‑mediated (login cookies, SIM registrations, device fingerprints, surveillance‑KYC files that you never see and cannot correct). The same forces that broke soft monetary guarantees are now eating soft identity guarantees (AI‑generated dossiers, breached databases, black‑box “risk scores” that determine access to payments or platforms with no recourse). If money is “memory with consequences,” the legacy identity stack is someone else’s memory, with consequences you can’t predict or audit. A cypherpunk monetary stack has to invert that: identity becomes your memory, exposed as proofs when needed, with consequences you can anticipate and revoke.
17.1 Why identity must decouple from doxxing
The web’s trust default has flipped: seeing is no longer believing, and platforms strip or bury provenance labels inconsistently. In that world, accounts and government IDs are brittle anchors; they leak power to gatekeepers and to whoever holds the database keys. What scales instead is mechanism over memory: claims backed by cryptographic evidence any honest party can check cheaply and without permission. This is the same shift the thesis makes for money (verification, not authority, is the arbiter), and it must be applied to identity.
On the monetary side, this thesis already requires lawful privacy by design and auditable receipts. Our identity layer inherits those same invariants: privacy by default, optional disclosure, and verification that stays cheap at the p95 tail (VerifyPrice), so reputation can travel across platforms without turning into surveillance.
The pressure to collapse identity into dossiers is not ideological; it is operational convenience. Real‑name laws make it easy to send subpoenas, freeze accounts, or plug transaction logs into risk models. Platforms embrace this because it reduces their perceived liability: “we knew who they were.” But this convenience is exactly what makes the system brittle under stress. As repression rises, the same databases that enable fraud detection become target lists; the same KYC stack that reassures regulators becomes the lever for de‑banking. If we keep that architecture and merely bolt on better wallets, we have not changed the game. The point of this layer is not to hide bad actors; it is to change the substrate from identity‑first to evidence‑first, so that the question “may this entity do X?” is answered by proofs of capability and behavior, not by lookup in a file.
17.2 Design goals (humans and machines)
Concretely, identity and reputation in this stack are built to satisfy six goals:
- Privacy by default, proof by construction. There are no standing, globally queryable identity graphs. All predicates (“is 18+”, “is in EU”, “has paid taxes”, “owns model X”, “ran task Y on a verifiable machine”) are proven as needed, most often with zero‑knowledge or lightweight audit proofs.
- Selective disclosure and unlinkability. Verifiers learn only the outcome of a predicate, not your name, address, or the rest of your credential bundle; repeated uses do not link unless you choose to present correlatable data (for example via a viewing key).
- Cheap, public verification. Any honest laptop, contract, or chain can check the evidence quickly, measured the same way we measure proofs and compute, with VerifyPrice. Identity proofs are not special; they are just another workload with a p50 and p95 target.
- Open admission for issuers and verifiers. Anyone can issue attestations bound to their own reputation and slashing; anyone can verify. There is no clerical priesthood whose blessing is required to exist on the network.
- Hardware honesty for machine identity. Where machines sign facts (sensing, proving, inference), claims are anchored in verifiable machines: open designs, sampled supply chains, and attestations that can themselves be wrapped in succinct proofs. Remote attestation becomes a public artifact, not a vendor‑run oracle.
- Lawful privacy. Regulators and auditors get provable, time‑bounded visibility via receipts and viewing keys, not through custodial choke points. The same “lawful privacy” posture we require for settlement corridors applies to identity: disclosure without doxxing.
Read this as a symmetry with the SoV requirements. We insisted that a credible store of value must be credibly scarce, neutral, censorship‑resistant, verifiable, and lawful‑privacy compatible. Identity that wants to survive the same stressors needs the analogous properties.
17.2.1 Issuer pluralism without issuer anarchy
“Open admission for issuers” (goal 4) sounds like chaos: anyone can issue AML credentials, so criminals just issue their own. The resolution is that issuer sets are policy objects, not global permissions.
Issuer sets by policy context:
| Policy Context | Example Predicate | Issuer Set Definition |
|---|---|---|
| AML-screened | “User is not on OFAC/EU sanctions lists” | Threshold of ≥2 from {Exchange A, Exchange B, Compliance Provider C, Regulator D} |
| Age-verified | “User is 18+” | Any licensed identity provider in user’s jurisdiction, or threshold of ≥2 from global providers |
| Accredited investor | “User meets SEC accredited investor definition” | Licensed broker-dealer or accredited investor verification platform |
| Device integrity | “Device is L0-B or higher” | Hardware attestation from ≥2 independent TEE vendors or open-source attestation |
How issuers enter/exit sets:
- Stake: Issuers post collateral (WC-Base) proportional to the value of credentials they issue. If credentials are later found fraudulent, stake is slashed.
- Audit receipts: Issuers must publish periodic audit receipts proving their verification processes meet policy requirements. Audits are conducted by independent third parties.
- Slashing: If an issuer’s credentials are shown to be systematically false (e.g., issuing “not on sanctions list” to sanctioned entities), their stake is slashed and they are removed from relevant issuer sets.
- Jurisdictional diversity: For high-stakes predicates (AML, accredited investor), issuer sets require representation from ≥2 jurisdictions to prevent single-government capture.
How verifiers select policy bundles:
- Verifiers declare which policy contexts they require (e.g., “AML_screened AND Age_verified”).
- The protocol resolves these to issuer sets.
- Presentations must include credentials from issuers in the relevant sets.
- Verifiers can add custom issuer requirements (e.g., “must include credential from Regulator D for EU users”).
Why this isn’t centralization:
- No single issuer is required for any predicate.
- New issuers can join sets by meeting stake and audit requirements.
- Issuer sets are transparent and auditable.
- Users can choose which issuers to use within the set.
This model preserves open admission while ensuring that predicates in regulated contexts are backed by accountable issuers.
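As a concrete sketch, the threshold-and-diversity rules above can be expressed as a small policy check. Everything here is illustrative: the `IssuerSetPolicy` type, its field names, and the example issuer labels are assumptions for exposition, not protocol definitions.

```python
from dataclasses import dataclass

# Hypothetical issuer-set policy object (names are illustrative, not normative).
@dataclass(frozen=True)
class IssuerSetPolicy:
    context: str            # policy context, e.g. "AML_screened"
    issuers: frozenset      # accountable issuers currently admitted to this set
    threshold: int          # minimum distinct issuers a presentation must include
    min_jurisdictions: int  # jurisdictional-diversity floor for high-stakes predicates

def satisfies_policy(policy, credential_issuers, jurisdiction_of):
    """Check a presentation's credential issuers against an issuer-set policy."""
    in_set = {i for i in credential_issuers if i in policy.issuers}
    if len(in_set) < policy.threshold:
        return False  # not enough accountable issuers backing the predicate
    jurisdictions = {jurisdiction_of[i] for i in in_set}
    return len(jurisdictions) >= policy.min_jurisdictions  # single-government capture guard

aml = IssuerSetPolicy("AML_screened",
                      frozenset({"ExchangeA", "ExchangeB", "ComplianceC", "RegulatorD"}),
                      threshold=2, min_jurisdictions=2)
juris = {"ExchangeA": "US", "ExchangeB": "US", "ComplianceC": "EU", "RegulatorD": "EU"}

print(satisfies_policy(aml, {"ExchangeA", "ComplianceC"}, juris))  # True: 2 issuers, 2 jurisdictions
print(satisfies_policy(aml, {"ExchangeA", "ExchangeB"}, juris))    # False: 2 issuers, 1 jurisdiction
```

The point of modeling the issuer set as data rather than code is that verifiers can publish, version, and audit their policy bundles without changing any protocol logic.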
17.3 The identity kernel: claims, credentials, receipts
We treat identity as a claim machine rather than a name registry.
Identifiers are cheap, ephemeral, and scoped. They look like pairwise keys or DIDs, not global user IDs. They are per‑relationship handles that can be rotated without losing underlying capability.
Credentials are issued by many parties (exchanges, employers, DAOs, universities, regulators, devices). Each credential is a capability statement (“member of AML‑screened set S until 2027‑12‑31”, “passed this exam”, “controls this revenue stream”), not a dossier.
Presentations are one‑off proofs of predicates over those credentials: membership, non‑membership, attributes in a range, conjunctions of simple statements. They are typically zero‑knowledge: the verifier learns that the predicate is true, nothing more.
Receipts are the portable artifacts the rest of the system cares about. Every accepted presentation produces a PIDL receipt (the minimal, chain‑agnostic object we introduced in the primitives) that binds the claim, the workload or circuit hash, the proof hash, the SLA tier, and timestamps. It may also carry a viewing‑key envelope for future audit. Receipts are the unit of reputation.
Another way to see this kernel is as a “foreign exchange desk” between human semantics and machine semantics. Humans think in stories: I have lived and worked here for years; I pay my taxes; I have never defaulted. Machines can’t price stories; they can only price predicates and receipts. The kernel turns the human story into three artifacts machines can use: a bag of credentials, a set of ZK‑style presentations, and a growing ledger of receipts. At no point does it require that the underlying story be flattened into a single row in a global database. Instead, each interaction carries just enough truth, in just the right format, to clear the transaction and update reputation.
This kernel compiles directly into the developer surfaces already proposed (PaL SDK to express “prove this predicate,” PIDL receipts to carry results, and lawful‑privacy corridors to pay without doxxing). It keeps identity portable across chains and institutions while avoiding permanent cross‑context linkability.
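A minimal sketch of the receipt shape this kernel produces, assuming Python as notation. The field names here are illustrative stand-ins for the PIDL object defined in the primitives chapter, not the normative schema:

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Illustrative PIDL-style receipt: binds claim, circuit, proof, SLA, and time.
@dataclass(frozen=True)
class Receipt:
    claim: str                  # predicate that was presented, e.g. "age>=18"
    circuit_hash: str           # hash of the circuit/workload that proved it
    proof_hash: str             # hash of the proof artifact
    sla_tier: str               # e.g. "Gold"
    issued_at: float            # unix timestamp
    viewing_key_envelope: Optional[str] = None  # optional lawful-privacy audit hook

    def digest(self) -> str:
        """Canonical hash over all fields -- the portable unit of reputation."""
        return h(json.dumps(asdict(self), sort_keys=True).encode())

r = Receipt("age>=18", h(b"circuit"), h(b"proof"), "Gold", time.time())
assert r.digest() == r.digest()  # deterministic: anyone can recompute and compare
```

Because the digest is recomputable from public fields, any party holding a receipt can cross-check it against an on-chain commitment without asking the issuer anything.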
17.4 Human identity: personhood and compliance without dossiers
The temptation, especially in regulated domains, is to start from the list of things institutions “need to know” and work backward: date of birth, address, full legal name, tax ID. The kernel flips the direction: start from what the policy actually cares about (age range, jurisdiction, uniqueness, absence from a sanctions list) and implement that as a small portfolio of predicates (proofs). That change sounds cosmetic but has profound consequences. It means an exchange can prove “this customer is not on these lists and is in one of these jurisdictions” without ever seeing or storing the underlying identifiers in hot systems. The institution still has a legal relationship with the customer, but that relationship does not have to leak into every protocol message.
What to prove, privately:
- Uniqueness / Sybil resistance. Rate‑limited nullifiers, stake‑and‑slash, or attested‑liveness ceremonies provide bounded action rates per human without revealing who the human is.
- Eligibility. Age, residency, licensing, accreditation, or “member‑of a KYC‑screened set,” proved with set‑membership or range proofs.
- Standing & solvency. Proofs of reserves/income ranges or payroll continuity, revealed to counterparties under viewing keys during credit underwriting, not to the entire chain.
How it composes with the stack: A wallet invokes PaL to compile predicates to proofs, pays for the work over privacy rails (adaptor‑signature corridors, shielded pools), and gets a PIDL receipt the counterparty can verify on‑chain or in a browser in <5s p95 (target). Auditors get opt‑in visibility via viewing keys, exactly the “lawful privacy by design” requirement.
17.4.1 Sybil resistance menu
Sybil resistance (preventing one entity from creating unlimited fake identities) is a core challenge. Different mechanisms offer different tradeoffs; the stack supports a menu of approaches rather than mandating one.
| Mechanism | How It Works | VerifyPrice Impact | Unlinkability | Best For |
|---|---|---|---|---|
| Rate-limited nullifiers | Each identity can perform N actions per epoch; nullifiers prevent reuse without linking | Low (hash + membership check) | High (actions unlinkable across epochs) | Spam prevention, voting, airdrops |
| Stake-based | Identity actions require WC collateral; bad behavior → slashing | Low (balance check) | Medium (stake addresses may be linkable) | High-stakes actions, market making, proving |
| Social/liveness ceremonies | Periodic video calls, in-person events, or social graph vouching | Medium (coordination overhead; ZK for privacy) | Low-Medium (ceremony attendance may leak) | Proof of personhood, high-value credentials |
| Device-based attestation | Hardware attestation limits identities per device | Low (TEE attestation) | Low (device fingerprinting risk) | Machine identity; bounded human use |
| Economic proof-of-work | Solving computational puzzles to create identity | Medium-High (computation cost) | High (puzzles don’t identify solver) | Bot prevention, rate limiting |
Tradeoff guidance:
- For maximum privacy: Rate-limited nullifiers + economic PoW. No linkage, but limited action rate.
- For regulated contexts: Stake-based + issuer-set credentials. Some linkage to stake address, but accountability.
- For machine identity: Device attestation bounded by L0 grade. Accept fingerprinting risk for hardware honesty.
- For proof of personhood: Social ceremonies with ZK proofs. Higher coordination cost, but strong Sybil resistance.
Composability:
Mechanisms can be combined. A high-stakes action might require:
- Rate-limited nullifier (prevents spam),
- Stake (ensures accountability),
- Issuer credential (meets regulatory requirement).
Each layer adds cost and reduces privacy; the protocol allows verifiers to specify requirements and users to choose compliant paths.
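The layered requirement above can be sketched as a single authorization check, under stated assumptions: the nullifier construction is a plain hash stand-in for a ZK rate-limited nullifier, and all names (`authorize`, issuer labels, stake units) are illustrative.

```python
import hashlib

def nullifier(secret: bytes, epoch: int, index: int) -> str:
    # Rate-limited nullifier: deterministic per (secret, epoch, index), so reuse
    # is detectable, while values from different epochs are unlinkable hashes.
    return hashlib.sha256(secret + epoch.to_bytes(8, "big") + index.to_bytes(4, "big")).hexdigest()

def authorize(secret, epoch, index, max_actions, seen,
              stake, min_stake, credentials, required_issuers):
    """Illustrative composition: every layer must pass for a high-stakes action."""
    if index >= max_actions:
        return False                       # rate limit: only N actions per epoch
    n = nullifier(secret, epoch, index)
    if n in seen:
        return False                       # this action slot was already spent
    if stake < min_stake:
        return False                       # accountability layer (slashing backs it)
    if not credentials & required_issuers:
        return False                       # regulatory layer (issuer-set credential)
    seen.add(n)
    return True

seen = set()
ok = authorize(b"user-secret", epoch=42, index=0, max_actions=3, seen=seen,
               stake=100, min_stake=50,
               credentials={"ExchangeA"}, required_issuers={"ExchangeA", "RegulatorD"})
print(ok)  # True on first use
print(authorize(b"user-secret", 42, 0, 3, seen, 100, 50,
                {"ExchangeA"}, {"ExchangeA"}))  # False: nullifier already seen
```

In a real deployment each branch would be a proof the verifier checks, not a plaintext comparison; the control flow, however, is exactly this conjunction.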
17.5 Machine identity: capability, not brand
Machines (provers, miners, sensors, model endpoints) need identity as much as humans do, but it should look nothing like a cloud vendor account. Their “identity” is a statement of capability on a particular hardware and software profile coupled with receipts for past behavior.
Machines transact by attesting to what they are and what they did.
When a device participates in this economy, it attests to what it is and what it did. Attestations link to open hardware profiles: public RTL and microarchitectures for simple key‑storage or TEEs, documented side‑channel budgets, sampling and imaging results for each lot. Those attestations are then wrapped in succinct (SNARK/STARK) proofs so any chain can verify them cheaply. A MatMul miner, a camera, or an inference endpoint thus emits two artifacts per job: a computation proof (or proof‑of‑logits transcript) and a machine attestation bound to that proof. Together they say, “this machine, running this code, produced this result.”
This is reputation for robots: SLA‑backed capacity that accrues a track record via receipts, not by vendor logo. It is exactly the role “verifiable machines” fits as Layer 0 for the whole architecture.
This is also how we prevent “brand attacks” from becoming systemic risk. In the legacy cloud world, if a single vendor’s attestation service is compromised or coerced, every system that depends on that vendor inherits the failure. In the verifiable‑machines picture, the unit of trust is not Vendor X but Profile P: a description of layout, side‑channel budget, sampling regime, and proof wrapper. Multiple vendors can implement the same profile; multiple fabs can produce chips that satisfy it. If one implementation goes bad, receipts tagged with that profile can be frozen or hair‑cut in risk models without tearing down the category. Machine identity is therefore less like a logo and more like a public standard with explicit, testable error bars.
17.6 Reputation: priced behavior, not personal data
Most existing “reputation” systems (credit scores, influencer scores, platform trust ratings) are really just hidden classifiers trained on PII. They work by hoarding as much data as possible, inferring patterns, and then publishing a single opaque number. This is convenient for platforms and lethal for anyone caught on the wrong side of a model. In a receipts‑based world, the scoring function becomes almost embarrassingly simple: how often did this handle do what it promised, under which SLAs, and how quickly did it make counterparties whole when something went wrong? Because the inputs are receipts rather than dossiers, everyone can recompute their own view of reputation, and no one needs to be trusted to hold the entire world’s private life in order to answer “should we work with them?”
- Positive reputation accumulates as fulfilled obligations with proofs: deliveries within SLA, honest inferences, verified matmul jobs, timely private settlements, clean audits. Each receipt carries latency, cost, and SLA metadata; these can be summarized per handle without exposing counterparties.
- Negative reputation accrues as rejects and slashes: failed audits, invalid proofs, settlement timeouts that were not refunded.
- Scope and decay are explicit: reputation is contextual (market/domain) and decays unless renewed, to avoid “permanent record” harms.
Because receipts are portable and verifiable, reputation becomes a market primitive. A prover can advertise, “over the last 30 days I delivered PROOF_2^20 within p95 < 1s at cost <$0.001 with 0.04% failure,” and anyone can check. A privacy router can publish swap success and refund safety. A device manufacturer can show the sampling and failure history of a given hardware profile. Pricing decisions (fees, limits, access) can be tied to these numbers without exposing the underlying humans or machines.
17.7 Patterns that keep identity private and usable
The patterns that emerge from this design are mundane but powerful:
- zk‑KYC / zk‑AML. Users prove membership in a regulated allow‑list (managed by plural issuers) without revealing which issuer or which entry. Regulators can request targeted disclosures via viewing keys during investigations, but there is no standing database that every counterparty queries.
- Rate‑limited actions. Per‑relationship nullifiers cap spam and abuse without linking handles. If abuse escalates, the victim can request a provable block (non‑membership proofs) that prevents that nullifier from affecting them again, without learning who was behind it.
- Credit without doxxing. A lender demands a proof set (income range, payroll continuity, prior timely settlements) plus a bond; the loan contract auto‑slashes on missed proofs. No one learns the borrower’s legal name unless the escrow triggers or a court compels disclosure through a viewing key.
- Creator & data provenance. Cameras and pipelines on open hardware sign origin; receipts embed these origin proofs in a way that survives platform stripping. Viewers verify proofs locally; platforms become transport, not arbiters of truth.
- Work credentials for models. Model stewards prove they own specific weights or commits and that inferences came from those weights under defined parameters (PoL/ZKML hybrids). Clients price the service by a VerifyPrice‑quoted SLA, not by brand claims.
These patterns are not hypothetical or science‑fiction; they are what it looks like to run ordinary workflows in a world that refuses to leak identity by default. A retail user will never say, “I used an anon credential system with range proofs today.” They will say, “I proved I was old enough to trade without uploading my passport again,” or “I got a loan without giving five different SaaS vendors access to my bank account.”
A corporate treasurer will say, “We proved this wallet is ours to the regulator, and no one else learned our internal structure.” The patterns are there so that the story people tell about using the system is boring and practical, not heroic and risky.
17.8 Failure modes & guardrails
Any identity layer that matters will eventually be attacked, both technically and procedurally. Bad actors will try to farm credentials, launder reputation, correlate pseudonyms, and pressure issuers to become quiet censors. That is not a sign of design failure; it is a sign that the layer has become real. What distinguishes a durable architecture from a toy is not the absence of failure, but whether failures are observable, bounded, and repairable without appealing to a central authority. The guardrails in this section are not add‑ons; they are the difference between a system that slowly re‑centralizes and one that holds its shape under pressure.
The main failure modes and guardrails are:
- Centralized attesters. If one vendor or agency controls machine attestation or anon‑cred issuance, identity becomes permissioned by decree. The countermeasure is pluralism: open hardware profiles, lot sampling, and ZK‑wrapped quotes that any verifier can check; multiple issuers whose credentials are either interchangeable under policy or combined via threshold logic.
- Proof cost creep. If the p95 verify time or cost for identity predicates drifts up, only gateways can afford to check receipts. Verification recentralizes, and we are back to “ask the platform.” The countermeasure is to treat identity workloads as first‑class citizens of VerifyPrice: publish dashboards per predicate, and treat regressions as Sev‑1 incidents.
- Bridge and settlement doxxing. If corridors leak metadata, identity collapses into routing tables: “this cluster of addresses usually settles to this KYC’d off‑ramp.” The countermeasure is settlement designs with cryptographic atomicity (adaptor signatures, light clients), refund safety as an invariant, and public settlement telemetry (success rates, latency distributions, anonymity‑set sizes).
- Reputation blacklists. Global “bad lists” are honey traps for political pressure. The countermeasure is scoped, revocable policies tied to evidence: lists of receipts that justify slashing or exclusion in a given context, and proofs of (non‑)membership that are themselves portable. Actors retain the ability to present counter‑receipts and move to other jurisdictions and markets.
If we cannot see where identity is failing, we cannot fix it. So, as with everything else in this stack, the mantra applies: no dashboards, no trust.
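The “proof cost creep” guardrail can be made operational with a simple rolling check. This is a sketch, assuming a nearest-rank p95 and an illustrative 25% drift tolerance; both the tolerance and the alert label are parameters an operator would choose, not values fixed by the stack.

```python
import math

def p95(samples):
    """Nearest-rank 95th percentile of verify-time samples (seconds)."""
    s = sorted(samples)
    return s[math.ceil(0.95 * len(s)) - 1]

def check_regression(predicate, baseline_p95, window, tolerance=0.25):
    """Flag an incident when the rolling p95 drifts more than `tolerance`
    above the published baseline for this identity predicate."""
    current = p95(window)
    if current > baseline_p95 * (1 + tolerance):
        return f"SEV-1: {predicate} p95 {current:.2f}s vs baseline {baseline_p95:.2f}s"
    return None

healthy = [0.8] * 100                      # tail stays flat -> no alert
creeping = [0.8] * 90 + [4.0] * 10         # tail creeping up -> alert
print(check_regression("age>=18", baseline_p95=1.0, window=healthy))   # None
print(check_regression("age>=18", baseline_p95=1.0, window=creeping))  # SEV-1 line
```

The substantive point is that the alert fires on the tail, not the median: a predicate whose p50 looks fine can still be recentralizing verification at the p95.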
17.9 How it fits the modular stack
The final virtue of this design is that it does not require a new class of machinery. Identity and reputation are just another set of workloads in the same loop we use for everything else:
- Create / Compute. A human or machine emits a claim: “I am unique and 18+,” “this device captured this video,” “this miner computed MATMUL_4096 under profile P.” Hardware profile (if any) is attached here.
- Prove. The PaL SDK compiles the predicate into a circuit or useful‑work audit, routes it to a prover market (multi‑ZK, PoUW, open‑TEE cluster), and returns a PIDL receipt with timing and cost metadata.
- Settle. The privacy rails kit executes refund‑safe payouts across BTC↔ZEC/XMR corridors or shielded pools, using viewing keys and audit bundles for entities that need to prove compliance.
- Verify. Anyone runs `verify(receipt)` locally, on‑chain, or inside another proof. The predicate either checks or it doesn’t. No API calls to platforms, no “trust us” dashboards.
- Telemetry. VerifyPrice, anonymity‑set size, swap success rates, decentralization stats, and hardware‑profile distributions are public.
Notice how little new machinery we need to add. Identity does not require inventing a fifth pillar; it requires reusing the same Create → Prove → Settle → Verify loop and the same primitives (PaL, PIDL, privacy rails, VerifyPrice) that already make privacy, proofs, and compute behave like money. That is a feature, not a shortcut.
When humans and machines identify and build reputation through the same receipt factory that mints monetary claims, the system gains a deep coherence: the thing that makes value credible is the same thing that makes actors credible.
The stack stops being “crypto plus KYC” and becomes one fabric where work, truth, and permission all pass through the same narrow gate: proofs anyone can check.
17.10 What to measure (identity/reputation SLOs)
For operators and allocators, identity becomes legible through a small set of metrics:
- Predicate VerifyPrice. Median and tail verify time and cost per standard identity predicate (age range, residency proof, set membership / non‑membership). Targets mirror general proofs: p95 under a few seconds, sub‑cent cost, very low failure.
- Unlinkability. Rate at which presentations can be linked by passive adversaries under controlled tests; regressions trigger design changes.
- Settlement privacy. Corridor success rates, latency, refund safety, and changes in anonymity‑set sizes for shielded pools used in identity‑linked flows.
- Hardware honesty. Share of machine‑signed receipts tied to open profiles and sampled lots; estimated cost to subvert a profile without being caught.
- Reputation health. Fraction of pricing and access decisions based on receipts vs. account flags; compliance with scope and decay policies.
Those numbers complete the picture. Privacy and proofs supply the cryptographic primitives; compute and verifiable machines supply the substrate; VerifyPrice makes honesty the cheap equilibrium. Identity and reputation are then just the part of the story where humans and machines show up in the ledger: not as dossiers, but as actors whose capabilities and histories are written in receipts rather than biographies.
Identity ≠ name; it is a set of provable capabilities. Reputation ≠ biography; it is a ledger of receipts. Bind both to privacy by default, publish VerifyPrice so anyone can check them cheaply, and anchor machine attestations in verifiable hardware. Then humans and machines can transact globally (auditable when required, non‑seizable by design, and neutral to jurisdictional mood swings). This is identity for a monetary stack where privacy, proofs, and compute are the base commodities, exactly the posture this thesis argues we must build toward.
17.11 Implementation sketch (from the SDK)
All of the above can feel abstract until you see it at the SDK edge. In practice, identity and reputation show up in three simple patterns that developers wire into their apps using PaL, PIDL, and the privacy-rails kit.
Human → compliance without names. A wallet encodes a claim like:
prove(age ≥ 18) ∧ prove(residency ∈ EU) ∧ prove(member_of AML_screened_set)
It sends this to PaL, which compiles the predicates into a circuit, sends them to a prover market, and returns a PIDL receipt. The user then pays over privacy rails (e.g., a BTC↔ZEC/XMR corridor or shielded pool) with that receipt attached.
From the merchant’s perspective:
- They verify the receipt in-browser or in-contract in under ~5 seconds p95 (see VerifyPrice targets).
- They never see the passport or underlying identifiers; they learn only that the predicate was true.
- If needed, an auditor can later open a narrow viewing-key window that reveals just enough to satisfy policy, not the user’s entire history.
This is “zk-KYC/zk-AML” as a UX: “I proved I was allowed to trade without uploading my documents again,” not “I used a fancy ZK system.”
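The wallet-side flow above can be sketched with minimal stand-ins. To be clear about assumptions: `compile_predicates`, `prove`, and `present` are invented names for exposition, and the hashes stand in for real circuits and proofs; the actual PaL SDK surface is not specified here.

```python
import hashlib

def compile_predicates(predicates):
    """Stand-in for PaL compilation: a predicate set maps to a circuit identifier."""
    return hashlib.sha256(" AND ".join(sorted(predicates)).encode()).hexdigest()

def prove(circuit_hash, witness):
    """Stand-in for the prover market: returns an opaque proof artifact."""
    return hashlib.sha256((circuit_hash + witness).encode()).hexdigest()

def present(predicates, witness):
    """Build the presentation a merchant sees: predicates and proof, never the witness."""
    circuit = compile_predicates(predicates)
    proof = prove(circuit, witness)
    return {"predicates": sorted(predicates), "circuit": circuit, "proof": proof}

presentation = present(
    {"age>=18", "residency in EU", "member_of AML_screened_set"},
    witness="(private credentials, held locally, never transmitted)",
)
print(presentation["predicates"])
assert "witness" not in presentation  # the merchant learns outcomes, not documents
```

The invariant worth noticing is structural: the witness is an input to proving but never a field of the presentation, so there is nothing for the merchant to store, breach, or subpoena.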
Machine → capability without brand.
A sensor on hardware profile open_camera_v1 captures video. The device emits an attested capture (Layer 0) saying “this frame came from this sensor under this profile.” PaL wraps that attestation in a succinct proof, producing:
- a provenance proof (the canonical `Provenance(C, chain)` workload), and
- a PIDL receipt binding the proof to the hardware profile and time.
A newsroom (or platform) pays the camera over privacy rails when it receives valid receipts. Readers later verify provenance locally (even if intermediaries strip metadata) by checking the receipt and proof against the public profile for open_camera_v1.
The camera never registers an account or uploads a dossier; its “identity” is simply “a device of profile P that has produced N correct receipts.” Capability, not brand.
Reputation → receipts, not profiles. A prover’s handle accumulates receipts each time it fulfills a job:
- PROOF_2^20 with p95 verify-time 0.8s and cost <$0.001,
- zero slashes for invalid proofs,
- consistent SLA hits over 30 days.
When it advertises itself to buyers, it doesn’t send a glossy PDF or a LinkedIn page; it publishes a small summary:
“Last 30 days: delivered PROOF_2^20 under SLA Gold with p95 verify-time 0.8s, 99.9% success, zero slashes.”
Anyone can:
- pull the underlying PIDL receipts,
- run the reference verifier, and
- decide whether to trust this handle for a given fee or credit limit.
No personal data is needed; only priced reliability. Reputation is literally “what the receipts say,” recomputable by anyone using open tools (see Appendix C for SDK patterns).
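“What the receipts say” is literally computable. A minimal sketch, assuming receipts are dictionaries with illustrative field names (`ts`, `status`, `verify_s`); the real PIDL fields and any reference verifier are defined elsewhere in the stack:

```python
import math

def summarize(receipts, window_days=30, now=1_700_000_000):
    """Recompute a handle's advertised stats from raw receipts -- no profile needed."""
    cutoff = now - window_days * 86400
    recent = [r for r in receipts if r["ts"] >= cutoff]
    ok = [r for r in recent if r["status"] == "ok"]
    times = sorted(r["verify_s"] for r in ok)
    return {
        "jobs": len(recent),
        "success_rate": len(ok) / len(recent) if recent else 0.0,
        "p95_verify_s": times[math.ceil(0.95 * len(times)) - 1] if times else None,
        "slashes": sum(1 for r in recent if r["status"] == "slashed"),
    }

# 99 fulfilled jobs and one slash within the window:
receipts = [{"ts": 1_700_000_000 - i * 3600, "status": "ok", "verify_s": 0.8}
            for i in range(99)]
receipts.append({"ts": 1_700_000_000, "status": "slashed", "verify_s": 0.0})

s = summarize(receipts)
print(s["success_rate"], s["p95_verify_s"], s["slashes"])  # 0.99 0.8 1
```

Anyone running this over the same receipts gets the same numbers, which is the whole point: the summary is a derived view, not an authority’s claim.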
§17.12 Part III Summary: Minimum Viable Stack and Telemetry Principles
Part III has covered four infrastructure layers. Before proceeding to the economic layers (4–5), we consolidate the minimum viable requirements and establish the telemetry reproducibility principle that prevents dashboards from becoming a new priesthood.
Minimum Viable Layers 0–3
The thesis does not require perfection on day one. It requires that each layer meet a minimal threshold that makes the monetary claims credible. If any of these fail, the SoV story collapses.
| Layer | Minimum Viable Requirement | Falsification Test |
|---|---|---|
| L0 (Verifiable Machines) | At least one L0-B grade hardware profile (sampled, reproducible firmware) for canonical workloads; sampling regime operational for L0-A profiles | If all proving/compute runs on L0-A with <20% sampling coverage, hardware trust is unbounded → thesis weakened |
| L1 (Reachability) | Encrypted P2P transport + ≥2 fallback transports (Tor, satellite, mesh); VerifyReach observatory with ≥3 independent operators | If core services are unreachable from >20% of sampled ASNs for >7 days without fallback, network is censorable → thesis fails |
| L2 (Distribution) | Threshold-signed releases (≥3-of-5) + transparency log + ≥2 reproducible builders + rollback protection | If a single key compromise can push malicious updates, or rollback attacks succeed, distribution is captured → thesis fails |
| L3 (Identity) | ≥3 identity predicates (uniqueness, eligibility, standing) with p95 VerifyPrice <5s; issuer pluralism rulebook operational | If identity requires centralized issuer or VerifyPrice exceeds 30s, identity becomes a chokepoint → thesis weakened |
How this becomes a deployable spec:
Each layer exports artifacts that higher layers consume:
- L0 → hardware profiles, attestations, FERs
- L1 → reachability metrics, transport diversity stats
- L2 → release receipts, builder attestations, update health metrics
- L3 → credential presentations, reputation receipts
If these artifacts are not available, verifiable, and cheap to check, the economic layers (4–5) have nothing to stand on.
Telemetry Reproducibility Principle
Telemetry is central to “no dashboards, no trust.” But telemetry itself can become a point of capture if it is controlled by a priesthood.
Principle: Telemetry Reproducibility
Telemetry must be reproducible from raw public artifacts by multiple independent parties, or it is not part of the trust base.
Specifically:
- Raw data availability: All inputs to telemetry metrics (proofs, receipts, attestations, measurement logs) are published or archived in a way that independent parties can fetch.
- Open methodology: Aggregation formulas, sampling methods, and statistical procedures are documented and versioned.
- Independent recomputation: At least 2 independent parties (not affiliated with protocol governance) must be able to recompute key metrics (VerifyPrice, VerifyReach, hardware grade distribution) and publish divergence reports.
- Disagreement handling: If independent calculations diverge by >5%, an investigation is triggered and published. Persistent divergence without resolution downgrades the metric’s trust status.
- No single source of truth: Dashboards aggregate multiple sources; no single operator’s data determines a metric.
Why this matters:
Without reproducibility, “dashboards” become “oracle claims by whoever runs the dashboard.” The thesis would then rest on “trust the dashboard operator,” which is no better than “trust the bank.”
Reproducible telemetry transforms dashboards from authority claims into consensus views that anyone can verify.
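The divergence rule can be sketched directly, using the >5% threshold stated above; the function names and report shape are illustrative.

```python
def divergence(a: float, b: float) -> float:
    """Relative divergence between two independent recomputations of a metric."""
    return abs(a - b) / max(abs(a), abs(b))

def reconcile(metric, recomputations, threshold=0.05):
    """Compare all pairs of independent values; >5% divergence triggers an
    investigation and, if unresolved, downgrades the metric's trust status."""
    vals = list(recomputations.values())
    worst = max(divergence(x, y) for i, x in enumerate(vals) for y in vals[i + 1:])
    status = "investigate" if worst > threshold else "ok"
    return {"metric": metric, "status": status, "max_divergence": worst}

print(reconcile("VerifyPrice.p95", {"labA": 1.00, "labB": 1.02}))  # ok (~2%)
print(reconcile("VerifyPrice.p95", {"labA": 1.00, "labB": 1.20}))  # investigate (~16.7%)
```

Because the check is over all pairs, adding more independent recomputers can only tighten the consensus view, never loosen it.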
Monetary Consequences: How Layer 0–3 Failures Affect Value
To close the loop between infrastructure and economics, here are explicit monetary consequences when layers degrade:
| Layer | Degradation Scenario | Monetary Consequence |
|---|---|---|
| L0 | Hardware profile deprecated (S2 or higher) | WC minted under that profile: quarantined, 15% haircut, ineligible as collateral |
| L0 | L0-A concentration >70% of issuance | New L0-A issuance paused; risk premium applied to L0-A collateral |
| L1 | VerifyReach <70% in a region for >7 days | WC from operators in that region: 5% regional risk premium on collateral |
| L1 | Transport diversity <2 operational paths | Incident response; potential fee surcharges for affected corridors |
| L2 | Reproducible build divergence detected | Release blocked; post-mortem required; trust score for affected releases reduced |
| L2 | Single CDN serves >50% of updates | Neutrality risk flagged; discount rate on associated WC increases |
| L3 | Issuer set for critical predicate shrinks to <2 | Predicate marked “reduced assurance”; verifiers warned; stake requirements increase |
| L3 | VerifyPrice for identity predicate exceeds 30s p95 | Predicate downgraded from “cheap” to “expensive”; may be excluded from certain SLA tiers |
These are not punitive measures; they are risk-adjusted repricing that ensures the market reflects actual infrastructure health.
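Mechanically, repricing is a lookup from (layer, event) to an adjustment. A sketch, assuming the haircut and premium values from the table; the rule keys and the idea of encoding them as a static map are illustrative, since in practice these would be governed policy parameters.

```python
# Illustrative repricing rules drawn from the table above (not protocol constants).
RULES = {
    ("L0", "profile_deprecated"): {"haircut": 0.15, "collateral_eligible": False},
    ("L1", "verifyreach_low"):    {"premium": 0.05},
    ("L3", "issuer_set_thin"):    {"assurance": "reduced"},
}

def reprice(collateral_value, layer, event):
    """Apply the risk adjustment for a layer-degradation event to WC collateral."""
    adj = RULES.get((layer, event), {})
    return {
        "value": collateral_value * (1 - adj.get("haircut", 0.0)),
        "eligible": adj.get("collateral_eligible", True),
        "premium": adj.get("premium", 0.0),
    }

print(reprice(1000.0, "L0", "profile_deprecated"))  # {'value': 850.0, 'eligible': False, 'premium': 0.0}
print(reprice(1000.0, "L1", "verifyreach_low"))     # {'value': 1000.0, 'eligible': True, 'premium': 0.05}
```

The design choice worth noting is that unknown (layer, event) pairs apply no adjustment, so the default posture is market pricing, with degradation events layered on as explicit, auditable rules.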