§19. Layer 4 – Truth & Work: Proof Systems and PoUW
Jason St George. "§19. Layer 4 – Truth & Work: Proof Systems and PoUW" in Next‑Gen Store of Value: Privacy, Proofs, Compute. Version v1.0. /v/1.0/read/part-iv/19-layer-4/
Layer 4 is the Prove/Verify spine of the stack.
Its job is to take arbitrary claims:
- “this media object came from camera C at time T”
- “this inference was computed by model M on input X with bounds B”
- “this corridor swap executed atomically and refund conditions were satisfied”
and turn them into portable proofs that:
- anyone can check with cheap, public verification, and
- can be standardized as commodities (canonical workloads, SLAs, prices).
19.1 What Layer 4 is (and isn’t)
Layer 4 is:
- The layer of circuits, proofs, and verification economics.
- The place where verification asymmetry is engineered and measured.
- The home of Proofs‑as‑a‑Library (PaL), proof factories, and canonical workload registries.
Layer 4 is not:
- A specific proof system (SNARK vs STARK vs something else); it assumes multi‑ZK.
- A single PoUW design; it supports several patterns as long as VerifyPrice and anti‑capture constraints are met.
- A truth oracle; it can attest to provenance and computation, not metaphysical correctness of content.
19.2 Verification asymmetry revisited
Recall the definition from Part I:
For a workload W:
- p(W) = cost (time, energy, hardware) to produce a result + proof.
- v(W) = cost to verify that result + proof.
- r(W) = v(W) / p(W) = verification asymmetry.
Layer 4’s goal is simple:
Make r(W) ≪ 1 for the workloads that matter, and keep it that way in production.
Why this matters:
- If v(W) is small and stable, anyone (including small nodes) can check proofs.
- That makes proofs and verified FLOPs commodities: units of work that any counterparty can accept without trusting a platform.
Layer 4 introduces:
- VerifyPrice(W): a public KPI vector for workload W:
  VerifyPrice(W) = {p50 time, p95 time, p50 cost, p95 cost, failure rate, hardware profile mix}
- r(W): published alongside VerifyPrice, making the asymmetry explicit.
Verification asymmetry is where the economic hinge sits: when proofs and verified FLOPs are cheap to check relative to production, they can be:
- priced,
- saved, and
- used as collateral,
without recreating a verification priesthood.
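The VerifyPrice(W) vector above can be sketched as a small aggregation over raw per-verification samples. This is an illustrative assumption, not a published SDK: the helper names, the nearest-rank percentile, and the sample values are all made up for the example.

```python
# Hypothetical sketch: assembling the VerifyPrice(W) KPI vector from raw
# per-verification samples. Field names mirror the vector in the text;
# helper names and values are illustrative only.
def percentile(samples, p):
    """Nearest-rank style percentile over a non-empty list of floats."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(round(p / 100 * (len(ordered) - 1))))
    return ordered[idx]

def verify_price(times_sec, costs_usd, failures, total_runs, hardware_mix):
    """Return the VerifyPrice(W) vector: p50/p95 time, p50/p95 cost,
    failure rate, and hardware profile mix."""
    return {
        "p50_time_sec": percentile(times_sec, 50),
        "p95_time_sec": percentile(times_sec, 95),
        "p50_cost_usd": percentile(costs_usd, 50),
        "p95_cost_usd": percentile(costs_usd, 95),
        "failure_rate": failures / total_runs,
        "hardware_profile_mix": hardware_mix,
    }

kpi = verify_price(
    times_sec=[0.4, 0.5, 0.9, 0.4, 0.6],
    costs_usd=[0.0003, 0.0004, 0.0003, 0.0005, 0.0003],
    failures=2, total_runs=5000,
    hardware_mix={"L0-A": 0.6, "L0-B": 0.4},
)
```

Publishing r(W) alongside this vector is then a single extra division: v(W) from the cost entries, p(W) from the prover's side.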
19.3 Canonical workloads and proof types
To avoid an unbounded zoo of bespoke proofs, Layer 4 maintains canonical workloads:
- MatMul(n, k, m; ε) – matrix multiplication at specified dimensions and error bounds.
- Inference(M, X; policy) – run model (M) on input (X) under constraints (e.g., norm bounds, clipping).
- Provenance(C, chain) – provenance chain for content type (C) (camera → editor → distributor).
- Settlement(S, policy) – settlement and refund logic for a corridor or shielded pool.
For each canonical workload, the stack defines:
- Circuits / arithmetization: how the workload is represented for proving systems.
- Proof schemas: which proof systems (SNARK/STARK/others) are supported and how to encode claims.
- SLO tiers: latency and reliability classes (e.g., “fast, less redundant” vs “slow, high redundancy”).
Canonical workloads matter economically:
- They become the SKUs for proof and compute markets.
- Work Credits are minted against units of these workloads, not arbitrary ad‑hoc tasks.
- VerifyPrice and r(W) are tracked per workload and tier.
19.3.1 Canonical workload definition template
For a workload to become a tradable SKU (and eligible for Work Credit issuance), it must be defined with sufficient precision. The following template ensures standardization:
Canonical Workload Definition Template
WorkloadID: Unique identifier (e.g., MATMUL_4096_FP32, INFER_LM_70B_256TOK, SETTLE_ATOMIC_BTC_ZEC)
Statement: What is being proved (natural language + formal predicate)
- Example: “The matrix product C = A·B was computed correctly, where A is n×k, B is k×m, and ‖C − A·B‖ ≤ ε.”
Public Inputs:
- Commitment to A, B (hash or Merkle root)
- Claimed result commitment (hash of C)
- Error bound ε
- Timestamp range
Private Inputs (Witness):
- Full matrices A, B, C
- Intermediate computation trace (if required by proof system)
What Is Verified:
- Correctness (computation matches claim)
- Bounds (result within specified limits)
- Freshness (timestamp within allowed window)
- Liveness (proof generated within epoch, not precomputed)
Verifier Complexity Class:
- Time: e.g., O(log n) or O(1) for succinct proofs; O(n) for non-succinct
- Memory: specified peak (e.g., ≤512MB for Laptop-Class)
- Proof size: max allowed (e.g., ≤1MB)
Policy Hooks: (machine-verifiable predicates)
- Hardware profile requirements (e.g., L0-B or higher)
- Prover stake requirements
- Membership/non-membership predicates (allowlist proofs, not graph inspection)
Allowed Hardware Profiles:
- L0-A: Yes (with issuance weight 0.9x)
- L0-B: Yes (1.0x)
- L0-C/D: Yes (1.1x)
SLA Tiers:
| Tier | Latency Target | Redundancy | Fee Multiplier |
|---|---|---|---|
| Bronze | p95 ≤ 60s | 1x verification | 1.0x |
| Silver | p95 ≤ 10s | 2x verification | 1.5x |
| Gold | p95 ≤ 2s | 3x verification + audit | 2.5x |
Example: MATMUL_4096_FP32
| Field | Value |
|---|---|
| WorkloadID | MATMUL_4096_FP32 |
| Statement | Matrix multiply C = A·B, dimensions 4096×4096, FP32, ‖C − A·B‖ ≤ ε |
| Public Inputs | Hash(A), Hash(B), Hash(C), ε, timestamp |
| Private Inputs | A, B, C, intermediate products |
| Verified | Correctness ✓, Bounds ✓, Freshness ✓ |
| Verifier Complexity | O(log n), ≤256MB, ≤500KB proof |
| L0 Requirement | L0-A minimum; L0-B+ for Gold tier |
| VerifyPrice Target | ≤ 2s (Laptop-Class) |
Why this matters:
Without this template, “canonical workload” risks becoming “whatever the prover says it is.” With it, workloads are:
- Precisely specified: Anyone can implement a conforming prover/verifier.
- Auditable: Claims can be checked against the template.
- Tradable: Markets can price and exchange standardized units.
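The template above can be captured as a typed record so that conforming provers and verifiers share one definition. This is an illustrative sketch: field names follow the template, the MATMUL_4096_FP32 values are copied from the example table, and everything else (the class shape itself) is an assumption.

```python
# Illustrative sketch of the canonical workload definition template as a
# typed, immutable record. The MATMUL_4096_FP32 values come from the
# example table in the text; the record layout is assumed.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CanonicalWorkload:
    workload_id: str
    statement: str                      # natural-language + formal predicate
    public_inputs: tuple                # commitments, bounds, timestamps
    verifier_time_class: str            # e.g., "O(log n)"
    max_verifier_memory_mb: int
    max_proof_size_kb: int
    allowed_profiles: dict = field(default_factory=dict)  # profile -> issuance weight
    verify_price_target_sec: float = 0.0

MATMUL_4096_FP32 = CanonicalWorkload(
    workload_id="MATMUL_4096_FP32",
    statement="C = A x B, 4096x4096, FP32, ||C - A x B|| <= eps",
    public_inputs=("Hash(A)", "Hash(B)", "Hash(C)", "eps", "timestamp"),
    verifier_time_class="O(log n)",
    max_verifier_memory_mb=256,
    max_proof_size_kb=500,
    allowed_profiles={"L0-A": 0.9, "L0-B": 1.0, "L0-C/D": 1.1},
    verify_price_target_sec=2.0,
)
```

A registry of such records is what the "launch SKUs" table in §19.9.1 would serialize.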
19.4 PoUW design patterns
Layer 4 doesn’t pick a single consensus recipe. It supports several proof‑of‑useful‑work patterns, as long as they satisfy:
- Open admission: commodity participants can join; no permissioned validator set.
- Unpredictable leader election: no one can cheaply bias the lottery.
- Useful work binding: you can’t precompute offline or reuse stale work.
- Proof quality & anti‑spam: junk proofs can’t flood the system without penalty.
Two main patterns, as sketched in the Introduction:
1. Hash‑gated useful work
- Miners perform a cheap hash race (SHA‑family).
- Crossing a threshold gives short‑lived eligibility to propose a block.
- To actually produce a valid block, the miner must attach a PoUW artifact (e.g., a MatMul proof, a zk‑proof of inference, or a bundle of proofs).
- The PoUW is seeded from header randomness to prevent precomputation.
Pros:
- Keeps the familiar PoW “lottery with a ticket.”
- Limits the number of concurrent PoUW contenders per block.
Cons:
- Ties useful‑work throughput to hash difficulty and block timing.
- Still burns hash energy as an admission fee.
2. Proof‑first selection
- Provers/miners race to produce useful‑work proofs and post them to a mempool.
- A lightweight mechanism (hash of proofs, VRF, or block proposer) selects which proofs get included and rewarded.
- Block headers incorporate proofs and entropy in ways that prevent grinding.
Pros:
- Decouples useful‑work throughput from hash difficulty.
- Lets markets prioritize high‑value workloads.
Cons:
- Needs strong anti‑cartel routing and spam controls.
- Requires careful design to avoid a “whoever owns the fastest network dominates” outcome.
Both patterns are compatible with:
- MatMul‑PoUW: turning matrix multiplications into consensus work.
- Inference‑PoUW: verified inference tasks.
- Hybrid schemes: where blockspace is partly PoUW and partly fee auction.
Layer 4 cares less about which pattern you choose and more about the properties of the resulting proofs:
- Are they cheap to verify (VerifyPrice)?
- Are they linked to hardware profiles (Layer 0)?
- Are they routed neutrally (no house‑only flows)?
- Are rewards and Work Credits tied mechanically to proofs, not admin decisions?
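The hash-gated pattern can be sketched in a few lines. This is a toy, not a reference implementation: the difficulty, header bytes, and seed-derivation tag are all illustrative assumptions; the point is only that eligibility comes from a cheap hash race and the useful-work instance is bound to header randomness.

```python
# Toy sketch of "hash-gated useful work": a cheap SHA-256 race gates
# block-proposal eligibility, and the PoUW instance is seeded from header
# randomness so proofs cannot be precomputed offline. All constants are
# illustrative assumptions.
import hashlib

DIFFICULTY_BITS = 12  # toy difficulty: required leading zero bits

def hash_race(header: bytes, nonce: int) -> bool:
    """Eligibility ticket: does SHA-256(header || nonce) clear the threshold?"""
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

def pouw_seed(header: bytes) -> bytes:
    """Seed for the useful-work instance, bound to header randomness."""
    return hashlib.sha256(b"PoUW-seed" + header).digest()

header = b"block-header-with-fresh-randomness"
nonce = 0
while not hash_race(header, nonce):  # cheap admission race
    nonce += 1
seed = pouw_seed(header)  # the MatMul/inference instance derives from this
```

In a proof-first design, the same seed binding applies but the race over headers disappears; selection moves to the proof mempool.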
19.4.1 PoUW security property checklist
Critics will argue: “PoUW centralizes; network advantages dominate; markets get captured; ‘useful work’ becomes make-work.” To counter this, any PoUW implementation must satisfy the following testable properties:
PoUW Security Properties (Must Hold)
P1. Precomputation Resistance
- Proofs must be seeded by unpredictable randomness (block hash, VRF output) revealed after work begins.
- Freshness window: Proofs valid only for N blocks after seed revelation (e.g., N ≤ 10).
- Test: Can an attacker with 10x compute advantage stockpile valid proofs? If yes, property fails.
P2. Grinding Resistance
- Bound the advantage from trying many candidate headers/proofs.
- Grinding budget: Attacker with X% hashrate gains at most (1 + δ)·X% expected rewards, where δ ≤ 0.1.
- Mechanism: Commit-reveal schemes, VRF-based selection, difficulty adjustment.
- Test: Simulate grinding attacks; measure reward distribution skew.
P3. Network Advantage Bound
- Quantify acceptable propagation advantage.
- Target: p95 propagation delay ≤ 2s for proof announcements.
- Mitigation: Delayed selection windows (announce → wait → select), batch windows, multiple relays.
- Test: Measure geographic distribution of block proposers; flag if any region >40%.
P4. Spam Resistance
- Junk proofs cannot flood the system cheaply.
- Mechanism: Fee/deposit per proof submission; slashing for invalid proofs.
- Deposit floor: Deposit ≥ 10x expected verification cost.
- Slash rate: 100% of deposit for invalid proof; 50% for proof that fails to meet claimed SLA.
- Test: Can an attacker DoS the mempool for <$1000/hour? If yes, property fails.
P5. Cartel Detection
- Top-N share and entry latency are Sev-1 metrics.
- Thresholds:
- Top-1 prover share: flag if >20%
- Top-5 prover share: flag if >50%
- New prover entry latency (time from stake to first successful proof): target ≤7 days
- Response: If thresholds breached for 30 days, governance must propose remediation (fee adjustments, workload diversification, neutral routing rule changes).
P6. Useful Work Binding
- Work must be genuinely useful, not make-work.
- Verification: At least 10% of proofs are independently re-verified by random auditors.
- Economic signal: If demand for a workload class drops below threshold, it is deprecated from PoUW eligibility.
- Test: Is the same work being done outside the PoUW context? If no one would pay for it independently, it’s make-work.
Telemetry for PoUW Health:
| Metric | Target | Sev-1 Threshold |
|---|---|---|
| Top-1 prover share | <20% | >30% for 14 days |
| Top-5 prover share | <50% | >70% for 14 days |
| Geographic concentration | No region >30% | Any region >50% |
| Entry latency (new prover) | ≤7 days | >30 days |
| Invalid proof rate | <0.1% | >1% |
| Grinding advantage (δ) | ≤0.1 | >0.2 |
Monetary Consequences:
If PoUW properties degrade:
- WC minted under degraded PoUW may face issuance weight reduction (e.g., 0.8x if Top-5 share >60%).
- Provers with >20% share may face diminishing returns on additional work (progressive fee reduction).
- Workloads failing usefulness test are deprecated from WC eligibility.
This checklist makes PoUW security auditable, not aspirational.
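The Sev-1 thresholds in the telemetry table are mechanical enough to state as code. A minimal sketch, assuming a flat metrics dictionary (the function shape and key names are illustrative; the numeric thresholds come from the table above):

```python
# Sketch of the Sev-1 checks from the PoUW health telemetry table.
# Threshold values are taken from the table; the interface is assumed.
def pouw_sev1_flags(metrics: dict) -> list:
    """Return the names of PoUW health metrics currently in Sev-1 breach."""
    flags = []
    if metrics["top1_share"] > 0.30:            # >30% for 14 days
        flags.append("top1_share")
    if metrics["top5_share"] > 0.70:            # >70% for 14 days
        flags.append("top5_share")
    if metrics["max_region_share"] > 0.50:      # any region >50%
        flags.append("geographic_concentration")
    if metrics["entry_latency_days"] > 30:      # new prover entry >30 days
        flags.append("entry_latency")
    if metrics["invalid_proof_rate"] > 0.01:    # >1%
        flags.append("invalid_proof_rate")
    if metrics["grinding_delta"] > 0.2:         # grinding advantage
        flags.append("grinding_advantage")
    return flags

flags = pouw_sev1_flags({
    "top1_share": 0.35, "top5_share": 0.55, "max_region_share": 0.25,
    "entry_latency_days": 5, "invalid_proof_rate": 0.0005, "grinding_delta": 0.05,
})
```

In this example only the top-1 prover share breaches; a real monitor would also track the "for 14 days" persistence windows before escalating.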
19.5 Proof factories and PaL
Most developers don’t want to think about circuits and arithmetization; they want to say:
“Prove that this computation / provenance / settlement happened, then get me a receipt and pay whoever did the proving.”
Layer 4 provides this via:
- Proof factories: infrastructure clusters specialized in generating proofs for canonical workloads.
- Proofs‑as‑a‑Library (PaL): an SDK that compiles high‑level claims into proofs.
PaL exposes interfaces like:
- `prove_compute(f, inputs, policy)`
- `prove_provenance(asset_id, lineage)`
- `prove_settlement(tx, corridor_policy)`
Under the hood, PaL:
- Maps the request to a canonical workload (W).
- Selects suitable proof systems and hardware profiles (respecting caller constraints like “open hardware only”).
- Submits the job via neutral routers to proof factories or PoUW miners.
- Receives back a PIDL receipt plus a proof artifact.
- Returns this to the caller and broadcasts it where needed (e.g., for Work Credit issuance or contract settlement).
For the builder, Layer 4 looks like “call PaL, get proof+receipt.” For the stack, it is the glue between:
- application logic (Layer 2/3),
- hardware (Layer 0), and
- economic machinery (Work Credits, VerifyPrice).
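The five PaL steps above can be sketched end to end. Everything here is a hedged assumption for illustration: `route_job`, the `PIDLReceipt` fields, and the workload mapping are stand-ins, not the actual SDK surface.

```python
# Hedged sketch of the PaL call path: map a high-level claim to a
# canonical workload, route the job neutrally, return proof + receipt.
# All names (route_job, PIDLReceipt, field choices) are illustrative.
from dataclasses import dataclass

@dataclass
class PIDLReceipt:
    workload_id: str
    prover: str
    proof_system: str

def route_job(workload_id: str, policy: dict) -> PIDLReceipt:
    # Stand-in for neutral routing to proof factories / PoUW miners.
    return PIDLReceipt(workload_id, prover="factory-7", proof_system="STARK")

def prove_compute(f_name: str, inputs: tuple, policy: dict):
    """PaL-style entry point: returns (proof_artifact, receipt)."""
    # Steps 1-3: map the request to a canonical workload and route it.
    workload_id = "MATMUL_4096_FP32" if f_name == "matmul" else "PROOF_2^20"
    receipt = route_job(workload_id, policy)
    # Step 4: the prover returns a proof artifact (stubbed here).
    proof = b"proof-bytes"
    # Step 5: hand both back to the caller for broadcast / WC issuance.
    return proof, receipt

proof, receipt = prove_compute("matmul", inputs=("A", "B"), policy={"open_hw": True})
```

The caller never sees circuits or arithmetization; that is the point of the abstraction.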
19.6 VerifyPrice observatory
Verification asymmetry is a design goal; VerifyPrice turns it into a dashboard.
The VerifyPrice observatory is Layer‑4’s telemetry role:
It continuously measures VerifyPrice(W) across:
- hardware profiles,
- client implementations,
- geographic regions.
It publishes:
- p50/p95 verify times,
- estimated energy per verification,
- failure rates and mismatch rates,
- diversity metrics (how many independent verifiers are actually checking).
It flags:
- regressions (“v(W) creeping up; client bloat or circuit changes”),
- centralization (“90% of verifications now on hardware profile Hx in country C”).
This has three effects:
1. Economic: pricing. Markets can price proofs and verified FLOPs based on real verification costs, not marketing.
2. Security: decentralization. If verification concentrates, the stack knows it is drifting back towards platform trust.
3. Governance: change control. Any protocol change that affects circuits or proof systems must preserve VerifyPrice SLOs, or explicitly argue for a shift.
19.6.1 VerifyPrice measurement specification
VerifyPrice is now a monetary KPI—it determines whether Work Credits retain their value proposition. That requires a rigorous measurement harness, not just a dashboard slogan.
Reference Verifier Classes:
| Class | Hardware Spec | Use Case |
|---|---|---|
| Laptop-Class | 4-core CPU (≥2.5 GHz), 16GB RAM, SSD, no GPU | Baseline for “anyone can verify” claim; default for SLOs |
| Mobile-Class | ARM SoC (A15/Snapdragon 8-class), 8GB RAM, flash storage | Edge verification; critical for IoT and mobile wallets |
| Server-Class | 32-core CPU, 128GB RAM, NVMe, optional GPU | High-throughput verification; proof factories and exchanges |
Cost Vector Definition:
For each workload W and verifier class h, VerifyPrice is a 5-tuple:
VerifyPrice(W, h) = (t_p50, t_p95, e, m, fail)
Where:
- t_p50, t_p95: median and 95th-percentile verification time (seconds)
- e: energy per verification (Joules, measured via hardware counters or watt-meter)
- m: peak memory (MB)
- fail: failure rate (timeouts, invalid proof rejections, crashes)
USD cost is derived as cost = e·P_e + t·P_t, where P_e is the reference energy price (USD 0.10/kWh) and P_t is the opportunity cost of verifier time (USD 0.001/s), both published quarterly.
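The USD derivation is a one-liner once units are reconciled. A worked example, using the reference prices stated in the text (USD 0.10/kWh, USD 0.001/s); the Joules-to-kWh conversion factor is standard:

```python
# Worked example of the USD cost derivation: cost = e * P_e + t * P_t.
# Reference prices are the ones published in the text; energy is
# converted from Joules to kWh before pricing.
P_E_USD_PER_KWH = 0.10
P_T_USD_PER_SEC = 0.001
JOULES_PER_KWH = 3.6e6

def usd_cost(energy_joules: float, time_sec: float) -> float:
    """USD cost of one verification from its measured energy and time."""
    energy_kwh = energy_joules / JOULES_PER_KWH
    return energy_kwh * P_E_USD_PER_KWH + time_sec * P_T_USD_PER_SEC

# A 50 J, 0.9 s verification costs roughly $0.0009 -- dominated by the
# time term at these reference prices.
c = usd_cost(50, 0.9)
```

Note that at laptop-scale energy figures, the opportunity-cost term dominates; the energy term matters mainly for server-class throughput verification.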
Verifier Implementations:
| Requirement | Rationale |
|---|---|
| ≥2 independent implementations per workload tier | Prevents single-implementation bugs from corrupting measurements |
| Open-source, reproducible builds | Anyone can audit and rebuild |
| Deterministic output | Same proof → same result, always |
| Versioned and tagged | Measurements tied to specific verifier version |
Sampling Methodology:
- Random selection: Proofs selected uniformly at random from recent submissions (not cherry-picked).
- Stratified by region/profile: Measurements cover ≥10 regions and ≥3 hardware profiles per workload.
- Adversarial corpus: 10% of test proofs are malformed or worst-case (max witness size, pathological inputs) to measure failure handling.
- Continuous measurement: Not periodic snapshots; rolling 24-hour windows published hourly.
Adversarial Conditions:
| Condition | How Tested |
|---|---|
| Network latency | 200ms RTT injected for proof fetch |
| Packet loss | 5% random packet loss during verification |
| Worst-case proof size | Max allowed witness for workload class |
| Malformed proofs | 10% of test corpus intentionally invalid |
| Resource exhaustion | Verification under 80% memory pressure |
Reproducibility Standard:
- Benchmark harness: Open-source, versioned, deterministic. Anyone can run the same tests.
- Signed results: Each measurement batch is signed by ≥2 independent measurement operators.
- Divergence alerts: If independent operators diverge by >5%, investigation triggered.
- Archived raw data: All proof samples and timing logs archived for 1 year.
Why This Matters:
Without this spec, “VerifyPrice observatory” is an oracle claim. With it, VerifyPrice becomes reproducible consensus—any skeptic can run the harness, check the measurements, and falsify the dashboard if it’s wrong.
19.7 VerifyPrice in practice (targets, regressions, and what “good” looks like)
At the North-Star level, VerifyPrice is how we answer a very simple allocator question:
“If I hold this asset for a cycle, does one unit still buy at least as much verification as it used to?”
We care about VerifyPrice in three dimensions, designed to avoid circular reasoning:
19.7.1 Physical VerifyPrice SLOs (Constitutional)
These are the non-negotiable targets, measured in real resources on reference hardware:
| Workload Class | Metric | Target (Laptop-Class) | Sev-1 Threshold |
|---|---|---|---|
| PROOF_2^20 | p95 verify time | ≤5s | >10s for 7 days |
| PROOF_2^20 | energy per verification | ≤50J | >100J sustained |
| MATMUL_4096 | p95 verify time | ≤2s | >5s for 7 days |
| INFER_LM_70B_256TOK | p95 verify time | ≤30s | >60s for 7 days |
| SETTLE_CORRIDOR | p95 verify time | ≤10s | >30s for 7 days |
| All | failure rate | ≤0.1% | >1% for 7 days |
Why physical targets matter: They are exogenous to token price. Whether the token rallies or dumps, these targets must hold. If verification takes 30s on a laptop, the “anyone can verify” promise is broken regardless of what the market says.
19.7.2 Protocol Affordability SLOs (Operational)
These measure whether verification remains affordable as a fraction of typical transaction costs:
| Target | Definition |
|---|---|
| AffordabilityRatio ≤ 5% | Verification cost should be ≤5% of the median fee for that workload class |
| VerificationBudget ≤ 1% of tx value | For settlement workloads, verification cost ≤1% of median transaction value |
Why affordability matters: Even if physical VerifyPrice is low, if fees are set such that verification consumes most of the transaction cost, the economics break.
19.7.3 Token-Quoted VerifyPrice (Market Signal, Not Target)
Token-quoted VerifyPrice (verification cost denominated in WC-Base) is a useful market signal, but not a constitutional target. It measures: “How many verifications can one token purchase at the current fee schedule?” Call this purchasing power VerifyPower.
Why this is not a target: Token price is endogenous. If the token rallies, token-quoted costs naturally fall; if it dumps, they rise. Making this a target creates circular reasoning.
What it signals:
- Rising VerifyPower → token buys more verification → healthy
- Falling VerifyPower → either physical VerifyPrice degraded (bad) or fee schedule increased (investigate)
The protocol commits to physical SLOs and affordability ratios. Token-quoted metrics are dashboards for market participants, not governance constraints.
Summary of SLO Hierarchy:
| Level | What It Measures | Who Enforces | Consequence of Breach |
|---|---|---|---|
| Physical (Constitutional) | Real-world verification cost | Protocol governance | Sev-1; remediation required |
| Affordability (Operational) | Verification as % of fees | Fee policy | Fee schedule review |
| Token-quoted (Market) | Purchasing power signal | Market participants | Informational only |
This hierarchy ensures that VerifyPrice targets are falsifiable and non-circular.
Operationally, this turns VerifyPrice into a Sev-1 metric:
Sev-1 conditions for a canonical workload W (e.g., `PROOF_2^20`, `MATMUL_4096`, `INFER_LM_70B_256TOK`):
- `p95_t(W) > 10s` on reference hardware for more than a rolling window (say 7–30 days), or
- `p95_c(W) > $0.01` per verification on reference hardware without an external energy shock to explain it, or
- `r(W)` (verify/produce overhead) trending upward over multiple releases, or
- `fail(W)` (invalid/malformed/timeouts) trending upward beyond a small, explicit budget (e.g., 0.1% for proofs).
Any of these is a governance-relevant regression:
- The lab and protocol must ship a remediation plan (circuit slimming, client optimization, proof-system tuning, or hardware-profile diversification).
- VerifyPrice dashboards must show both the regression and the recovery path; “we’re working on it” without numbers does not count.
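The Sev-1 conditions above reduce to a rolling-window check over daily VerifyPrice samples. A minimal sketch, assuming a flat daily record format (keys and the 7-day window are illustrative; the numeric thresholds are the ones stated in the text):

```python
# Sketch of the Sev-1 regression conditions evaluated over a rolling
# window of daily VerifyPrice samples (newest last). Record keys and the
# window length are assumptions; thresholds come from the text.
def sev1_regression(daily: list, window: int = 7) -> bool:
    """daily: list of dicts with p95_t (sec), p95_c (USD), fail (rate)."""
    recent = daily[-window:]
    if len(recent) < window:
        return False  # not enough data to declare a sustained breach
    slow = all(d["p95_t"] > 10.0 for d in recent)    # p95 time breach
    costly = all(d["p95_c"] > 0.01 for d in recent)  # p95 cost breach
    failing = all(d["fail"] > 0.001 for d in recent) # failure budget breach
    return slow or costly or failing

healthy = [{"p95_t": 0.9, "p95_c": 0.0003, "fail": 0.0004}] * 7
degraded = [{"p95_t": 12.0, "p95_c": 0.0003, "fail": 0.0004}] * 7
```

The trend conditions on `r(W)` would need release-tagged history rather than a daily window, so they are omitted from this sketch.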
In practice, VerifyPrice should be published per canonical workload in a small, consistent format – for example:
- Percentiles: `p50_verify_time_sec`, `p95_verify_time_sec`, `p50_cost_usd`, `p95_cost_usd`.
- Quality: `failure_rate`, plus adversarial-mix outcomes (e.g., how many bad artifacts were caught under a Byzantine stress test).
- Context: optional `queue_depth` and `avg_wait_sec` so users can distinguish protocol cost from congestion.
Example (mock):
- `PROOF_2^20` verifies in 0.4s p50 / 0.9s p95 at $0.0003 cost per check, with `failure_rate = 0.0004` → status: healthy.
- `MATMUL_4096` in a PoUW pilot shows `r(W) = 0.20` (verify/produce ≈ 1:5), with 0.3s p95 verify time and low variance → status: healthy.
Why this KPI matters economically
VerifyPrice is not just a performance metric; it is the economic hinge of the triad:
- Fraud doesn’t pay. When `r(W) ≪ 1`, cheating is negative-carry: it costs more to fake convincing work than to do it honestly and prove it. Dishonesty becomes a bad trade.
- Commodities emerge. Standard units (proofs, verified FLOPs, privacy capacity) can be priced as SLAs because anyone can check them cheaply. We stop buying “trust in a platform” and start buying receipts.
- Subsidies align. Block rewards and fee markets underwrite capacities the world actually wants (proofs, verified compute, private settlement), not heat whose only purpose is to certify difficulty.
Imagine the circuit:
- A creator or service emits a claim – this video came from this camera; this model ran this computation; this account owns this asset.
- A decentralized market of provers competes to stamp that claim with succinct assurance, subject to public VerifyPrice SLOs.
- Payment settles privately across neutral rails that no gatekeeper can seize, leaving behind settlement receipts.
- Each hop (claim, proof, settlement) is verifiable by any observer without privileged access.
Nothing miraculous happens. The base layer’s default simply shifts from “believe and hope” to “verify and move on.” With that default in place:
- contracts rely on artifacts, not relationships;
- risk migrates from counterparty to computation;
- and the “sacred” migrates into the protocol layer—not as mysticism, but as the quiet dignity of systems that do not and cannot lie.
In practical terms, verification asymmetry is what makes these systems economically sustainable: when checking work is cheaper than faking it, markets can price integrity, automation can scale without trust, and networks can reward real contribution instead of wasted energy.
The same logic will reappear in VerifyReach and VerifySettle for comms and settlement: if those KPIs drift, neutrality and repression-resilience fail in exactly the same way that rising VerifyPrice breaks the commodity story for proofs and compute.
19.8 Layer‑4 stress tests
Layer 4 passes its SoV audition if it survives several adversarial scenarios:
-
Circuit bloat.
- Can we detect when proofs become too expensive to verify?
- Is there a migration path to leaner circuits or alternative arithmetizations?
-
Prover cartel.
- Can neutral routers and SLA/slashing mechanisms prevent a small set of proof factories from monopolizing high‑value workloads?
- Do we have metrics on top‑N prover share and entry latency?
-
Proof system break / new attack.
- Can we deprecate a proof system, rotate to alternatives, and quarantine affected Work Credits?
- Are proofs tagged with enough metadata (system, parameters) to enable selective invalidation or discounts?
If Layer 4 remains cheap to verify, open to participate, and instrumented enough to handle change, then Proofs and Compute deservedly move closer to “monetary primitive” rather than “platform feature.”
19.9 Minimum viable Layer-4 economy
Part IV should not remain agnostic about where the money lives. This section commits to a reference design for Layer 4’s economic machinery—the minimum that must work for the SoV claim to hold.
19.9.1 Canonical workload starter set
The following workloads are the launch SKUs—the minimum set required for a functioning proof economy:
| WorkloadID | Description | VerifyPrice Target (Laptop) | WC Eligibility |
|---|---|---|---|
| PROOF_2^20 | Generic ZK proof, 2^20 constraints | ≤ 5s | Tier A (full) |
| PROOF_2^24 | Large ZK proof, 2^24 constraints | ≤ 30s | Tier A (full) |
| MATMUL_4096_FP32 | Matrix multiply 4096×4096, FP32 | ≤ 2s | Tier A (full) |
| INFER_LM_7B_512TOK | 7B parameter LM inference, 512 tokens | ≤ 10s | Tier B (discounted) |
| INFER_LM_70B_256TOK | 70B parameter LM inference, 256 tokens | ≤ 30s | Tier B (discounted) |
| PROVENANCE_MEDIA | Media provenance chain verification | ≤ 3s | Tier A (full) |
| SETTLE_ATOMIC | Atomic swap settlement proof | ≤ 5s | Tier A (full) |
New workloads can be added via governance proposal if they meet the canonical workload template requirements (§19.3.1).
19.9.2 Tier rules for Work Credit issuance
Not all verified work is created equal. Work Credits are issued according to tier rules that price trust and verification quality:
| Tier | Verification Type | WC Issuance Weight | Collateral Grade |
|---|---|---|---|
| Tier A | Full cryptographic proof (ZK-SNARK/STARK) | 1.0x | Pristine |
| Tier B | Probabilistic verification (audited transcripts, error bounds) | 0.6x | Standard |
| Tier C | Attestation-backed (TEE + sampling) | 0.3x | Discounted |
Rules:
- Tier A only for high-assurance workloads (settlement, financial proofs).
- Tier B acceptable for inference and compute where full ZK is impractical; must have explicit error bounds (e.g., “99% confidence, 0.1% false positive rate”).
- Tier C for experimental or low-stakes workloads; not eligible as collateral in DeFi integrations.
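Combining these tier weights with the hardware-profile issuance weights from §19.3.1 gives a simple issuance rule. A sketch under that assumption (the function shape and the idea of multiplying the two weights are illustrative; the weight values are the ones tabulated above):

```python
# Sketch: Work Credit issuance as verified units scaled by the tier
# weight (this section) and the hardware profile weight (section 19.3.1).
# Multiplying the two weights is an assumed composition rule.
TIER_WEIGHT = {"A": 1.0, "B": 0.6, "C": 0.3}
PROFILE_WEIGHT = {"L0-A": 0.9, "L0-B": 1.0, "L0-C/D": 1.1}

def wc_issued(units_verified: float, tier: str, profile: str) -> float:
    """Work Credits minted for verified units under tier + profile rules."""
    return units_verified * TIER_WEIGHT[tier] * PROFILE_WEIGHT[profile]

# 100 units of Tier A work on L0-B hardware mint 100 WC;
# 100 units of Tier B work on L0-A hardware mint only 54 WC.
full = wc_issued(100, "A", "L0-B")
discounted = wc_issued(100, "B", "L0-A")
```

The point of the composition is that trust quality (tier) and hardware openness (profile) each price issuance independently.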
19.9.3 Fee + burn + slashing logic
The economic loop that connects work to value:
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 4 ECONOMIC LOOP │
├─────────────────────────────────────────────────────────────────┤
│ DEMAND │
│ │ │
│ ▼ │
│ User requests workload W ──► Pays fee F(W) in WC-Base │
│ │ │
│ ▼ │
│ Fee split: │
│ ├── 70% to prover (reward) │
│ ├── 20% burned (supply reduction) │
│ └── 10% to protocol treasury (security budget) │
│ │ │
│ ▼ │
│ Prover stakes collateral C(W) ──► If invalid proof: slash 100% │
│ │ │
│ ▼ │
│ Valid proof accepted ──► WC-Base minted to prover (issuance) │
│ │ │
│ ▼ │
│ Net effect: demand burns + mints; high demand → net burn │
└─────────────────────────────────────────────────────────────────┘
Fee formula (reference):
F(W) = BaseFee(W) × CongestionMultiplier × SLATierMultiplier
Where:
- BaseFee(W) is set per workload class, adjusted quarterly based on VerifyPrice trends.
- CongestionMultiplier rises with mempool depth (EIP-1559 style).
- SLATierMultiplier: Bronze 1.0x, Silver 1.5x, Gold 2.5x.
Issuance cap:
- Total new WC issuance per epoch is capped (e.g., halving schedule or capacity-linked).
- If demand exceeds cap, issuance is pro-rated; excess fees are fully burned.
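The fee formula and the 70/20/10 split can be sketched directly. The congestion multiplier here is a stand-in for an EIP-1559-style rule, and the base fee value is made up; the split percentages and SLA multipliers are the ones stated above.

```python
# Sketch of the reference fee formula and the 70/20/10 fee split.
# The congestion value is a stand-in for an EIP-1559-style adjustment;
# base fee is illustrative. Split fractions come from the diagram above.
SLA_MULT = {"bronze": 1.0, "silver": 1.5, "gold": 2.5}

def workload_fee(base_fee: float, congestion: float, tier: str) -> float:
    """F(W) = BaseFee(W) x CongestionMultiplier x SLATierMultiplier."""
    return base_fee * congestion * SLA_MULT[tier]

def fee_split(fee: float) -> dict:
    """70% prover reward, 20% burned, 10% protocol treasury."""
    return {"prover": 0.70 * fee, "burn": 0.20 * fee, "treasury": 0.10 * fee}

fee = workload_fee(base_fee=1.0, congestion=1.2, tier="silver")  # WC-Base
split = fee_split(fee)
```

Note the loop's net-burn property falls out of the split: when demand (and hence fee volume) exceeds the issuance cap, the 20% burn plus fully burned excess fees outweighs new minting.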
19.9.4 Neutral routing rule
To prevent “house-only flows” (proof factories routing work only to themselves):
- House-share cap: No single entity may receive >20% of routed work in any epoch.
- Enforcement: Neutral routers must implement round-robin or weighted-random selection across qualified provers.
- Telemetry: House-share per router published hourly; breaches are Sev-1.
- Slashing: Routers violating house-share cap lose routing eligibility for 30 days.
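A router honoring the house-share cap can be sketched as weighted-random selection that skips any prover already at the cap. The selection logic and cap accounting here are assumptions for illustration; only the 20% cap value comes from the text.

```python
# Sketch of a neutral router: random selection over qualified provers,
# excluding any prover whose running epoch share has hit the 20% cap.
# Selection style and accounting are illustrative assumptions.
import random

HOUSE_SHARE_CAP = 0.20

def pick_prover(provers: list, routed_so_far: dict, total_routed: int) -> str:
    """Pick a prover at random, skipping those at the house-share cap."""
    eligible = [
        p for p in provers
        if total_routed == 0
        or routed_so_far.get(p, 0) / total_routed < HOUSE_SHARE_CAP
    ]
    return random.choice(eligible)

random.seed(0)  # deterministic demo
routed = {f"factory-{i}": 0 for i in range(6)}
for i in range(1000):
    p = pick_prover(list(routed), routed, i)
    routed[p] += 1
max_share = max(routed.values()) / 1000
```

With six qualified provers the cap is always satisfiable; with fewer than five it is arithmetically infeasible (five provers capped at 20% is exactly 100%), which is itself a useful admission-policy check.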
19.9.5 VerifyPrice harness requirement
Before any workload is added to the canonical set:
- Reference verifier implementations (≥2) must exist.
- VerifyPrice baseline must be established via measurement harness (§19.6.1).
- SLO targets must be published and achievable on Laptop-Class hardware.
No workload without a VerifyPrice baseline can mint WC-Base.
Why this matters:
This section converts Part IV from “architecture” into “auditable mechanism.” A skeptic can now ask:
- “What workloads count?” → Starter set + template for additions.
- “How are credits issued?” → Tier rules with explicit weights.
- “Where does value accrue?” → Fee split + burn + collateral requirements.
- “How is capture prevented?” → Neutral routing + house-share cap.
If any of these mechanisms fail (fees don’t burn, tiers collapse, routers capture), the SoV claim is falsified.