§19. Layer 4 – Truth & Work: Proof Systems and PoUW

Layer 4 is the Prove/Verify spine of the stack.

Its job is to take arbitrary claims:

  • “this media object came from camera C at time T”
  • “this inference was computed by model M on input X with bounds B”
  • “this corridor swap executed atomically and refund conditions were satisfied”

and turn them into portable proofs that:

  • anyone can check with cheap, public verification, and
  • can be standardized as commodities (canonical workloads, SLAs, prices).

19.1 What Layer 4 is (and isn’t)

Layer 4 is:

  • The layer of circuits, proofs, and verification economics.
  • The place where verification asymmetry is engineered and measured.
  • The home of Proofs‑as‑a‑Library (PaL), proof factories, and canonical workload registries.

Layer 4 is not:

  • A specific proof system (SNARK vs STARK vs something else); it assumes multi‑ZK.
  • A single PoUW design; it supports several patterns as long as VerifyPrice and anti‑capture constraints are met.
  • A truth oracle; it can attest to provenance and computation, not metaphysical correctness of content.

19.2 Verification asymmetry revisited

Recall the definition from Part I:

For a workload W,

  • p(W) = cost (time, energy, hardware) to produce a result + proof.
  • v(W) = cost to verify that result + proof.
  • r(W) = v(W) / p(W) = verification asymmetry.

Layer 4’s goal is simple:

Make r(W) ≪ 1 for the workloads that matter, and keep it that way in production.

Why this matters:

  • If v(W) is small and stable, anyone (including small nodes) can check proofs.
  • That makes proofs and verified FLOPs commodities: units of work that any counterparty can accept without trusting a platform.

Layer 4 introduces:

  • VerifyPrice(W): a public KPI vector for workload W:

    VerifyPrice(W) = {p50 time, p95 time, p50 cost, p95 cost, failure rate, hardware profile mix}

  • r(W): published alongside VerifyPrice, making the asymmetry explicit.
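
A minimal sketch of how these two KPIs might be represented and computed (field and function names are illustrative, not part of any spec):

```python
from dataclasses import dataclass

@dataclass
class VerifyPriceRecord:
    """Public KPI vector for a canonical workload W (illustrative fields)."""
    p50_time_sec: float
    p95_time_sec: float
    p50_cost_usd: float
    p95_cost_usd: float
    failure_rate: float
    hardware_profile_mix: dict[str, float]  # profile -> share of verifications

def asymmetry(produce_cost_usd: float, verify_cost_usd: float) -> float:
    """r(W) = v(W) / p(W); the design target is r(W) << 1."""
    return verify_cost_usd / produce_cost_usd

# Example: a proof costing $5.00 to produce and $0.0003 to verify
r = asymmetry(produce_cost_usd=5.00, verify_cost_usd=0.0003)
assert r < 1e-4  # comfortably inside the "cheap to check" regime
```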

Verification asymmetry is where the economic hinge sits: when proofs and verified FLOPs are cheap to check relative to production, they can be:

  • priced,
  • saved, and
  • used as collateral,

without recreating a verification priesthood.

19.3 Canonical workloads and proof types

To avoid an unbounded zoo of bespoke proofs, Layer 4 maintains canonical workloads:

  • MatMul(n, k, m; ε) – matrix multiplication at specified dimensions and error bounds.
  • Inference(M, X; policy) – run model M on input X under constraints (e.g., norm bounds, clipping).
  • Provenance(C, chain) – provenance chain for content type C (camera → editor → distributor).
  • Settlement(S, policy) – settlement and refund logic for a corridor or shielded pool.

For each canonical workload, the stack defines:

  • Circuits / arithmetization: how the workload is represented for proving systems.
  • Proof schemas: which proof systems (SNARK/STARK/others) are supported and how to encode claims.
  • SLO tiers: latency and reliability classes (e.g., “fast, less redundant” vs “slow, high redundancy”).

Canonical workloads matter economically:

  • They become the SKUs for proof and compute markets.
  • Work Credits are minted against units of these workloads, not arbitrary ad‑hoc tasks.
  • VerifyPrice and r(W) are tracked per workload and tier.

19.3.1 Canonical workload definition template

For a workload to become a tradable SKU (and eligible for Work Credit issuance), it must be defined with sufficient precision. The following template ensures standardization:


Canonical Workload Definition Template

WorkloadID: Unique identifier (e.g., MATMUL_4096_FP32, INFER_LM_70B_256TOK, SETTLE_ATOMIC_BTC_ZEC)

Statement: What is being proved (natural language + formal predicate)

  • Example: “The matrix product C = A × B was computed correctly, where A, B ∈ ℝ^(4096×4096) and ‖C − A·B‖∞ ≤ ε.”

Public Inputs:

  • Commitment to A, B (hash or Merkle root)
  • Claimed result commitment (hash of C)
  • Error bound ε
  • Timestamp range

Private Inputs (Witness):

  • Full matrices A, B, C
  • Intermediate computation trace (if required by proof system)

What Is Verified:

  • Correctness (computation matches claim)
  • Bounds (result within specified limits)
  • Freshness (timestamp within allowed window)
  • Liveness (proof generated within epoch, not precomputed)

Verifier Complexity Class:

  • Time: O(log n) for succinct proofs; O(n) for non-succinct
  • Memory: specified peak (e.g., ≤512MB for Laptop-Class)
  • Proof size: max allowed (e.g., ≤1MB)

Policy Hooks: (machine-verifiable predicates)

  • Hardware profile requirements (e.g., L0-B or higher)
  • Prover stake requirements
  • Membership/non-membership predicates (allowlist proofs, not graph inspection)

Allowed Hardware Profiles:

  • L0-A: Yes (with issuance weight 0.9x)
  • L0-B: Yes (1.0x)
  • L0-C/D: Yes (1.1x)

SLA Tiers:

| Tier   | Latency Target | Redundancy              | Fee Multiplier |
|--------|----------------|-------------------------|----------------|
| Bronze | p95 ≤ 60s      | 1x verification         | 1.0x           |
| Silver | p95 ≤ 10s      | 2x verification         | 1.5x           |
| Gold   | p95 ≤ 2s       | 3x verification + audit | 2.5x           |

Example: MATMUL_4096_FP32

| Field               | Value |
|---------------------|-------|
| WorkloadID          | MATMUL_4096_FP32 |
| Statement           | Matrix multiply C = A × B, dimensions 4096×4096, FP32, ‖C − A·B‖∞ ≤ 10⁻⁵ |
| Public Inputs       | Hash(A), Hash(B), Hash(C), ε, timestamp |
| Private Inputs      | A, B, C, intermediate products |
| Verified            | Correctness ✓, Bounds ✓, Freshness ✓ |
| Verifier Complexity | O(log n), ≤256MB, ≤500KB proof |
| L0 Requirement      | L0-A minimum; L0-B+ for Gold tier |
| VerifyPrice Target  | t95 ≤ 2s (Laptop-Class) |
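
As a sketch, the template and the example above can be encoded as a small data structure; the field names and the Bronze-only SLA entry are illustrative choices, not a normative schema:

```python
from dataclasses import dataclass, field

@dataclass
class CanonicalWorkload:
    """Illustrative encoding of the §19.3.1 template."""
    workload_id: str
    statement: str
    public_inputs: list[str]
    private_inputs: list[str]
    verified: list[str]                 # correctness, bounds, freshness, liveness
    verifier_time_class: str            # e.g., "O(log n)"
    verifier_peak_memory_mb: int
    max_proof_size_kb: int
    allowed_hardware: dict[str, float]  # profile -> issuance weight
    sla_tiers: dict[str, dict] = field(default_factory=dict)

MATMUL_4096_FP32 = CanonicalWorkload(
    workload_id="MATMUL_4096_FP32",
    statement="C = A × B, 4096×4096 FP32, ‖C − A·B‖∞ ≤ 1e-5",
    public_inputs=["Hash(A)", "Hash(B)", "Hash(C)", "ε", "timestamp"],
    private_inputs=["A", "B", "C", "intermediate products"],
    verified=["correctness", "bounds", "freshness"],
    verifier_time_class="O(log n)",
    verifier_peak_memory_mb=256,
    max_proof_size_kb=500,
    allowed_hardware={"L0-A": 0.9, "L0-B": 1.0, "L0-C/D": 1.1},
    sla_tiers={"Bronze": {"p95_latency_s": 60, "redundancy": 1, "fee_mult": 1.0}},
)
```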

Why this matters:

Without this template, “canonical workload” risks becoming “whatever the prover says it is.” With it, workloads are:

  • Precisely specified: Anyone can implement a conforming prover/verifier.
  • Auditable: Claims can be checked against the template.
  • Tradable: Markets can price and exchange standardized units.

19.4 PoUW design patterns

Layer 4 doesn’t pick a single consensus recipe. It supports several proof‑of‑useful‑work patterns, as long as they satisfy:

  • Open admission: commodity participants can join; no permissioned validator set.
  • Unpredictable leader election: no one can cheaply bias the lottery.
  • Useful work binding: you can’t precompute offline or reuse stale work.
  • Proof quality & anti‑spam: junk proofs can’t flood the system without penalty.

Two main patterns, as sketched in the Introduction:

  1. Hash‑gated useful work

    • Miners perform a cheap hash race (SHA‑family).

    • Crossing a threshold gives short‑lived eligibility to propose a block.

    • To actually produce a valid block, the miner must attach a PoUW artifact:

      • e.g., MatMul proof, zk‑proof of inference, or a bundle of proofs.
    • The PoUW is seeded from header randomness to prevent precomputation.

    Pros:

    • Keeps the familiar PoW “lottery with a ticket.”
    • Limits the number of concurrent PoUW contenders per block.

    Cons:

    • Ties useful‑work throughput to hash difficulty and block timing.
    • Still burns hash energy as an admission fee.
  2. Proof‑first selection

    • Provers/miners race to produce useful‑work proofs and post them to a mempool.
    • A lightweight mechanism (hash of proofs, VRF, or block proposer) selects which proofs get included and rewarded.
    • Block headers incorporate proofs and entropy in ways that prevent grinding.

    Pros:

    • Decouples useful‑work throughput from hash difficulty.
    • Lets markets prioritize high‑value workloads.

    Cons:

    • Needs strong anti‑cartel routing and spam controls.
    • Requires careful design to avoid “whoever owns the fastest network dominates.”

Both patterns are compatible with:

  • MatMul‑PoUW: turning matrix multiplications into consensus work.
  • Inference‑PoUW: verified inference tasks.
  • Hybrid schemes: where blockspace is partly PoUW and partly fee auction.
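
For concreteness, here is a minimal sketch of the hash-gated pattern’s block-validity rule; the constants (difficulty target, 10-block freshness window) and helper names are assumptions for illustration:

```python
import hashlib

HASH_TARGET = 2**240      # illustrative hash-race difficulty threshold
FRESHNESS_WINDOW = 10     # artifact valid only within N blocks of the seed reveal

def wins_ticket(header: bytes, nonce: int) -> bool:
    """Cheap hash race: crossing the target grants short-lived eligibility."""
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < HASH_TARGET

def pouw_seed(header: bytes) -> bytes:
    """Seed the useful work from header randomness to prevent precomputation."""
    return hashlib.sha256(b"pouw-seed" + header).digest()

def block_valid(header: bytes, nonce: int, artifact_seed: bytes,
                artifact_height: int, tip_height: int,
                artifact_verifies: bool) -> bool:
    """A block needs a winning ticket AND a fresh, correctly seeded PoUW artifact."""
    fresh = tip_height - artifact_height <= FRESHNESS_WINDOW
    seeded = artifact_seed == pouw_seed(header)
    return wins_ticket(header, nonce) and fresh and seeded and artifact_verifies
```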

Layer 4 cares less about which pattern you choose and more about the properties of the resulting proofs:

  • Are they cheap to verify (VerifyPrice)?
  • Are they linked to hardware profiles (Layer 0)?
  • Are they routed neutrally (no house‑only flows)?
  • Are rewards and Work Credits tied mechanically to proofs, not admin decisions?

19.4.1 PoUW security property checklist

Critics will argue: “PoUW centralizes; network advantages dominate; markets get captured; ‘useful work’ becomes make-work.” To counter this, any PoUW implementation must satisfy the following testable properties:


PoUW Security Properties (Must Hold)

P1. Precomputation Resistance

  • Proofs must be seeded by unpredictable randomness (block hash, VRF output) revealed after work begins.
  • Freshness window: Proofs valid only for N blocks after seed revelation (e.g., N ≤ 10).
  • Test: Can an attacker with 10x compute advantage stockpile valid proofs? If yes, property fails.

P2. Grinding Resistance

  • Bound the advantage from trying many candidate headers/proofs.
  • Grinding budget: Attacker with X% hashrate gains at most (1 + δ)·X% expected rewards, where δ ≤ 0.1.
  • Mechanism: Commit-reveal schemes, VRF-based selection, difficulty adjustment.
  • Test: Simulate grinding attacks; measure reward distribution skew.

P3. Network Advantage Bound

  • Quantify acceptable propagation advantage.
  • Target: p95 propagation delay ≤ 2s for proof announcements.
  • Mitigation: Delayed selection windows (announce → wait → select), batch windows, multiple relays.
  • Test: Measure geographic distribution of block proposers; flag if any region >40%.

P4. Spam Resistance

  • Junk proofs cannot flood the system cheaply.
  • Mechanism: Fee/deposit per proof submission; slashing for invalid proofs.
  • Deposit floor: Deposit ≥ 10x expected verification cost.
  • Slash rate: 100% of deposit for invalid proof; 50% for proof that fails to meet claimed SLA.
  • Test: Can an attacker DoS the mempool for <$1000/hour? If yes, property fails.

P5. Cartel Detection

  • Top-N share and entry latency are Sev-1 metrics.
  • Thresholds:
    • Top-1 prover share: flag if >20%
    • Top-5 prover share: flag if >50%
    • New prover entry latency (time from stake to first successful proof): target ≤7 days
  • Response: If thresholds breached for 30 days, governance must propose remediation (fee adjustments, workload diversification, neutral routing rule changes).

P6. Useful Work Binding

  • Work must be genuinely useful, not make-work.
  • Verification: At least 10% of proofs are independently re-verified by random auditors.
  • Economic signal: If demand for a workload class drops below threshold, it is deprecated from PoUW eligibility.
  • Test: Is the same work being done outside the PoUW context? If no one would pay for it independently, it’s make-work.

Telemetry for PoUW Health:

| Metric                     | Target         | Sev-1 Threshold  |
|----------------------------|----------------|------------------|
| Top-1 prover share         | <20%           | >30% for 14 days |
| Top-5 prover share         | <50%           | >70% for 14 days |
| Geographic concentration   | No region >30% | Any region >50%  |
| Entry latency (new prover) | ≤7 days        | >30 days         |
| Invalid proof rate         | <0.1%          | >1%              |
| Grinding advantage (δ)     | ≤0.1           | >0.2             |
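
A sketch of an automated check against this table (the 14-day persistence windows are omitted for brevity; a real monitor would evaluate each predicate over its rolling window):

```python
def pouw_health_flags(top1_share: float, top5_share: float,
                      max_region_share: float, entry_latency_days: float,
                      invalid_proof_rate: float, grinding_delta: float) -> list[str]:
    """Return the Sev-1 breaches implied by the PoUW health table."""
    flags = []
    if top1_share > 0.30:
        flags.append("Top-1 prover share >30%")
    if top5_share > 0.70:
        flags.append("Top-5 prover share >70%")
    if max_region_share > 0.50:
        flags.append("Geographic concentration: a region >50%")
    if entry_latency_days > 30:
        flags.append("New-prover entry latency >30 days")
    if invalid_proof_rate > 0.01:
        flags.append("Invalid proof rate >1%")
    if grinding_delta > 0.2:
        flags.append("Grinding advantage δ >0.2")
    return flags

# Example: a healthy network returns no flags
assert pouw_health_flags(0.15, 0.45, 0.25, 5, 0.0005, 0.05) == []
```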

Monetary Consequences:

If PoUW properties degrade:

  • WC minted under degraded PoUW may face issuance weight reduction (e.g., 0.8x if Top-5 share >60%).
  • Provers with >20% share may face diminishing returns on additional work (progressive fee reduction).
  • Workloads failing usefulness test are deprecated from WC eligibility.

This checklist makes PoUW security auditable, not aspirational.

19.5 Proof factories and PaL

Most developers don’t want to think about circuits and arithmetization; they want to say:

“Prove that this computation / provenance / settlement happened, then get me a receipt and pay whoever did the proving.”

Layer 4 provides this via:

  • Proof factories: infrastructure clusters specialized in generating proofs for canonical workloads.
  • Proofs‑as‑a‑Library (PaL): an SDK that compiles high‑level claims into proofs.

PaL exposes interfaces like:

  • prove_compute(f, inputs, policy)
  • prove_provenance(asset_id, lineage)
  • prove_settlement(tx, corridor_policy)

Under the hood, PaL:

  1. Maps the request to a canonical workload W.
  2. Selects suitable proof systems and hardware profiles (respecting caller constraints like “open hardware only”).
  3. Submits the job via neutral routers to proof factories or PoUW miners.
  4. Receives back a PIDL receipt plus a proof artifact.
  5. Returns this to the caller and broadcasts it where needed (e.g., for Work Credit issuance or contract settlement).
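
The caller-facing shape this implies might look as follows; only the prove_compute name comes from the interface list above, while the ProofJob fields, stub behavior, and call shape are assumptions sketched for illustration:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ProofJob:
    """Hypothetical result shape: PIDL receipt plus portable proof artifact."""
    receipt: dict[str, Any]   # who proved what, when, under which SLA
    proof: bytes              # artifact anyone can verify cheaply

def prove_compute(f: Callable[..., Any], inputs: dict[str, Any],
                  policy: dict[str, Any]) -> ProofJob:
    """Stub of the PaL interface: a real SDK would run steps 1-5 above
    (map to a canonical workload, select proof system/hardware, route
    neutrally, collect receipt + proof, return and broadcast)."""
    raise NotImplementedError("illustrative interface only")

# Intended call shape (hypothetical arguments):
# job = prove_compute(
#     f=matmul_4096,
#     inputs={"A": a_commitment, "B": b_commitment},
#     policy={"workload": "MATMUL_4096_FP32",
#             "hardware": "open-only",   # caller constraint on profiles
#             "sla_tier": "Silver"},     # p95 ≤ 10s, 2x verification
# )
```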

For the builder, Layer 4 looks like “call PaL, get proof+receipt.” For the stack, it is the glue between:

  • application logic (Layer 2/3),
  • hardware (Layer 0), and
  • economic machinery (Work Credits, VerifyPrice).

19.6 VerifyPrice observatory

Verification asymmetry is a design goal; VerifyPrice turns it into a dashboard.

The VerifyPrice observatory is Layer‑4’s telemetry role:

  • It continuously measures VerifyPrice(W) across:

    • hardware profiles,
    • client implementations,
    • geographic regions.
  • It publishes:

    • p50/p95 verify times,
    • estimated energy per verification,
    • failure rates and mismatch rates,
    • diversity metrics (how many independent verifiers are actually checking).
  • It flags:

    • regressions (“v(W) creeping up; client bloat or circuit changes”),
    • centralization (“90% of verifications now on hardware profile Hx in country C”).

This has three effects:

  1. Economic: pricing. Markets can price proofs and verified FLOPs based on real verification costs, not marketing.

  2. Security: decentralization. If verification concentrates, the stack knows it is drifting back towards platform trust.

  3. Governance: change control. Any protocol change that affects circuits or proof systems must preserve VerifyPrice SLOs, or explicitly argue for a shift.

19.6.1 VerifyPrice measurement specification

VerifyPrice is now a monetary KPI—it determines whether Work Credits retain their value proposition. That requires a rigorous measurement harness, not just a dashboard slogan.

Reference Verifier Classes:

| Class        | Hardware Spec                                            | Use Case                                                     |
|--------------|----------------------------------------------------------|--------------------------------------------------------------|
| Laptop-Class | 4-core CPU (≥2.5 GHz), 16GB RAM, SSD, no GPU             | Baseline for the “anyone can verify” claim; default for SLOs |
| Mobile-Class | ARM SoC (A15/Snapdragon 8-class), 8GB RAM, flash storage | Edge verification; critical for IoT and mobile wallets       |
| Server-Class | 32-core CPU, 128GB RAM, NVMe, optional GPU               | High-throughput verification; proof factories and exchanges  |

Cost Vector Definition:

For each workload W and verifier class V, VerifyPrice is a 5-tuple:

VerifyPrice(W, V) = (t50, t95, e, m, f)

Where:

  • t50, t95: median and 95th-percentile verification time (seconds)
  • e: energy per verification (Joules, measured via hardware counters or watt-meter)
  • m: peak memory (MB)
  • f: failure rate (timeouts, invalid proof rejections, crashes)

USD cost is derived as c = e × p_energy + t95 × p_opp, where p_energy is the reference energy price (USD 0.10/kWh) and p_opp is the opportunity cost (USD 0.001/s), both published quarterly.
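
A worked sketch of the derived cost; converting the energy term from Joules to kWh is our assumption about units, and the reference prices are the ones quoted above:

```python
def usd_cost(energy_j: float, t95_s: float,
             p_energy_usd_per_kwh: float = 0.10,
             p_opp_usd_per_s: float = 0.001) -> float:
    """c = e × p_energy + t95 × p_opp, with e converted from J to kWh."""
    kwh = energy_j / 3.6e6  # 1 kWh = 3.6 MJ
    return kwh * p_energy_usd_per_kwh + t95_s * p_opp_usd_per_s

# Example: a 50 J, 2 s verification
print(f"{usd_cost(energy_j=50, t95_s=2.0):.6f}")  # ≈0.002001, opportunity-cost dominated
```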

Verifier Implementations:

| Requirement                                      | Rationale                                                        |
|--------------------------------------------------|------------------------------------------------------------------|
| ≥2 independent implementations per workload tier | Prevents single-implementation bugs from corrupting measurements |
| Open-source, reproducible builds                 | Anyone can audit and rebuild                                     |
| Deterministic output                             | Same proof → same result, always                                 |
| Versioned and tagged                             | Measurements tied to specific verifier version                   |

Sampling Methodology:

  • Random selection: Proofs selected uniformly at random from recent submissions (not cherry-picked).
  • Stratified by region/profile: Measurements cover ≥10 regions and ≥3 hardware profiles per workload.
  • Adversarial corpus: 10% of test proofs are malformed or worst-case (max witness size, pathological inputs) to measure failure handling.
  • Continuous measurement: Not periodic snapshots; rolling 24-hour windows published hourly.

Adversarial Conditions:

| Condition             | How Tested                                 |
|-----------------------|--------------------------------------------|
| Network latency       | 200ms RTT injected for proof fetch         |
| Packet loss           | 5% random packet loss during verification  |
| Worst-case proof size | Max allowed witness for workload class     |
| Malformed proofs      | 10% of test corpus intentionally invalid   |
| Resource exhaustion   | Verification under 80% memory pressure     |

Reproducibility Standard:

  • Benchmark harness: Open-source, versioned, deterministic. Anyone can run the same tests.
  • Signed results: Each measurement batch is signed by ≥2 independent measurement operators.
  • Divergence alerts: If independent operators diverge by >5%, investigation triggered.
  • Archived raw data: All proof samples and timing logs archived for 1 year.

Why This Matters:

Without this spec, “VerifyPrice observatory” is an oracle claim. With it, VerifyPrice becomes reproducible consensus—any skeptic can run the harness, check the measurements, and falsify the dashboard if it’s wrong.

19.7 VerifyPrice in practice (targets, regressions, and what “good” looks like)

At the North-Star level, VerifyPrice is how we answer a very simple allocator question:

“If I hold this asset for a cycle, does one unit still buy at least as much verification as it used to?”

We care about VerifyPrice in three dimensions, designed to avoid circular reasoning:

19.7.1 Physical VerifyPrice SLOs (Constitutional)

These are the non-negotiable targets, measured in real resources on reference hardware:

| Workload Class      | Metric | Target (Laptop-Class) | Sev-1 Threshold |
|---------------------|--------|-----------------------|-----------------|
| PROOF_2^20          | t95    | ≤5s                   | >10s for 7 days |
| PROOF_2^20          | e      | ≤50J                  | >100J sustained |
| MATMUL_4096         | t95    | ≤2s                   | >5s for 7 days  |
| INFER_LM_70B_256TOK | t95    | ≤30s                  | >60s for 7 days |
| SETTLE_CORRIDOR     | t95    | ≤10s                  | >30s for 7 days |
| All                 | f      | ≤0.1%                 | >1% for 7 days  |

Why physical targets matter: They are exogenous to token price. Whether the token rallies or dumps, these targets must hold. If verification takes 30s on a laptop, the “anyone can verify” promise is broken regardless of what the market says.

19.7.2 Protocol Affordability SLOs (Operational)

These measure whether verification remains affordable as a fraction of typical transaction costs:

AffordabilityRatio(W) = VerifyCost(W) / MedianFee(W)

| Target                              | Definition                                                                 |
|-------------------------------------|----------------------------------------------------------------------------|
| AffordabilityRatio ≤ 5%             | Verification cost should be ≤5% of the median fee for that workload class  |
| VerificationBudget ≤ 1% of tx value | For settlement workloads, verification cost ≤1% of median transaction value |

Why affordability matters: Even if physical VerifyPrice is low, if fees are set such that verification consumes most of the transaction cost, the economics break.
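
A minimal sketch of the combined check (function and argument names illustrative):

```python
def affordable(verify_cost_usd: float, median_fee_usd: float,
               median_tx_value_usd: float | None = None) -> bool:
    """Check both affordability targets from the table above."""
    ratio_ok = verify_cost_usd / median_fee_usd <= 0.05           # ≤5% of median fee
    if median_tx_value_usd is None:                               # non-settlement workload
        return ratio_ok
    budget_ok = verify_cost_usd / median_tx_value_usd <= 0.01     # ≤1% of tx value
    return ratio_ok and budget_ok

# Example: a $0.002 verification against a $0.10 median fee passes (2%);
# against a $0.02 median fee it fails (10% > 5%).
assert affordable(0.002, 0.10) and not affordable(0.002, 0.02)
```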

19.7.3 Token-Quoted VerifyPrice (Market Signal, Not Target)

Token-quoted VerifyPrice (verification cost in WC-Base) is a useful market signal, but not a constitutional target:

VerifyPower(token) = 1 / (VerifyPrice(W) × FeeSchedule(W))

This measures: “How many verifications can one token purchase at the current fee schedule?”

Why this is not a target: Token price is endogenous. If the token rallies, token-quoted costs naturally fall; if it dumps, they rise. Making this a target creates circular reasoning.

What it signals:

  • Rising VerifyPower → token buys more verification → healthy
  • Falling VerifyPower → either physical VerifyPrice degraded (bad) or fee schedule increased (investigate)

The protocol commits to physical SLOs and affordability ratios. Token-quoted metrics are dashboards for market participants, not governance constraints.


Summary of SLO Hierarchy:

| Level                       | What It Measures              | Who Enforces        | Consequence of Breach       |
|-----------------------------|-------------------------------|---------------------|-----------------------------|
| Physical (Constitutional)   | Real-world verification cost  | Protocol governance | Sev-1; remediation required |
| Affordability (Operational) | Verification as % of fees     | Fee policy          | Fee schedule review         |
| Token-quoted (Market)       | Purchasing power signal       | Market participants | Informational only          |

This hierarchy ensures that VerifyPrice targets are falsifiable and non-circular.

Operationally, this turns VerifyPrice into a Sev-1 metric:

  • Sev-1 conditions for a canonical workload W (e.g., PROOF_2^20, MATMUL_4096, INFER_LM_70B_256TOK):

    • p95_t(W) > 10s on reference hardware for more than a rolling window (say 7–30 days), or
    • p95_c(W) > $0.01 per verification on reference hardware without an external energy shock to explain it, or
    • r(W) (verify/produce overhead) trending upward over multiple releases, or
    • fail(W) (invalid/malformed/timeouts) trending upward beyond a small, explicit budget (e.g., 0.1% for proofs).

Any of these is a governance-relevant regression:

  • The lab and protocol must ship a remediation plan (circuit slimming, client optimization, proof-system tuning, or hardware-profile diversification).
  • VerifyPrice dashboards must show both the regression and the recovery path; “we’re working on it” without numbers does not count.
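
A sketch of how a monitoring job might evaluate these conditions for one workload; the trend test is simplified to “strictly rising across releases”, and all names are hypothetical:

```python
def sev1_hits(p95_t_s: float, p95_c_usd: float, r_by_release: list[float],
              fail_rate: float, fail_budget: float = 0.001,
              energy_shock: bool = False) -> list[str]:
    """Evaluate the Sev-1 triggers for one canonical workload W."""
    hits = []
    if p95_t_s > 10:                                   # sustained over the rolling window
        hits.append("p95 verify time >10s on reference hardware")
    if p95_c_usd > 0.01 and not energy_shock:
        hits.append("p95 verify cost >$0.01 without an energy-shock explanation")
    rising = all(a < b for a, b in zip(r_by_release, r_by_release[1:]))
    if len(r_by_release) >= 3 and rising:
        hits.append("r(W) trending upward over multiple releases")
    if fail_rate > fail_budget:
        hits.append("failure rate beyond explicit budget")
    return hits
```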

In practice, VerifyPrice should be published per canonical workload in a small, consistent format – for example:

  • Percentiles: p50_verify_time_sec, p95_verify_time_sec, p50_cost_usd, p95_cost_usd.
  • Quality: failure_rate, plus adversarial-mix outcomes (e.g., how many bad artifacts were caught under a Byzantine stress test).
  • Context: optional queue_depth and avg_wait_sec so users can distinguish protocol cost from congestion.

Example (mock):

PROOF_2^20 verifies in 0.4s p50 / 0.9s p95 at $0.0003 cost per check, failure_rate = 0.0004 → status: healthy.
MATMUL_4096 in a PoUW pilot shows r(W) = 0.20 (verify/produce ≈ 1:5), with 0.3s p95 verify time and low variance → status: healthy.

Why this KPI matters economically

VerifyPrice is not just a performance metric; it is the economic hinge of the triad:

  • Fraud doesn’t pay. When r(W) ≪ 1, cheating is negative-carry: it costs more to fake convincing work than to do it honestly and prove it. Dishonesty becomes a bad trade.
  • Commodities emerge. Standard units (proofs, verified FLOPs, privacy capacity) can be priced as SLAs because anyone can check them cheaply. We stop buying “trust in a platform” and start buying receipts.
  • Subsidies align. Block rewards and fee markets underwrite capacities the world actually wants (proofs, verified compute, private settlement), not heat whose only purpose is to certify difficulty.

Imagine the circuit:

  1. A creator or service emits a claim – this video came from this camera; this model ran this computation; this account owns this asset.
  2. A decentralized market of provers competes to stamp that claim with succinct assurance, subject to public VerifyPrice SLOs.
  3. Payment settles privately across neutral rails that no gatekeeper can seize, leaving behind settlement receipts.
  4. Each hop (claim, proof, settlement) is verifiable by any observer without privileged access.

Nothing miraculous happens. The base layer’s default simply shifts from “believe and hope” to “verify and move on.” With that default in place:

  • contracts rely on artifacts, not relationships;
  • risk migrates from counterparty to computation;
  • and the “sacred” migrates into the protocol layer—not as mysticism, but as the quiet dignity of systems that do not and cannot lie.

In practical terms, verification asymmetry is what makes these systems economically sustainable: when checking work is cheaper than faking it, markets can price integrity, automation can scale without trust, and networks can reward real contribution instead of wasted energy.

The same logic will reappear in VerifyReach and VerifySettle for comms and settlement: if those KPIs drift, neutrality and repression-resilience fail in exactly the same way that rising VerifyPrice breaks the commodity story for proofs and compute.

19.8 Layer‑4 stress tests

Layer 4 passes its SoV audition if it survives several adversarial scenarios:

  • Circuit bloat.

    • Can we detect when proofs become too expensive to verify?
    • Is there a migration path to leaner circuits or alternative arithmetizations?
  • Prover cartel.

    • Can neutral routers and SLA/slashing mechanisms prevent a small set of proof factories from monopolizing high‑value workloads?
    • Do we have metrics on top‑N prover share and entry latency?
  • Proof system break / new attack.

    • Can we deprecate a proof system, rotate to alternatives, and quarantine affected Work Credits?
    • Are proofs tagged with enough metadata (system, parameters) to enable selective invalidation or discounts?

If Layer 4 remains cheap to verify, open to participate, and instrumented enough to handle change, then Proofs and Compute deservedly move closer to “monetary primitive” rather than “platform feature.”

19.9 Minimum viable Layer-4 economy

Part IV should not remain agnostic about where the money lives. This section commits to a reference design for Layer 4’s economic machinery—the minimum that must work for the SoV claim to hold.

19.9.1 Canonical workload starter set

The following workloads are the launch SKUs—the minimum set required for a functioning proof economy:

| WorkloadID          | Description                            | VerifyPrice Target (Laptop) | WC Eligibility      |
|---------------------|----------------------------------------|-----------------------------|---------------------|
| PROOF_2^20          | Generic ZK proof, 2^20 constraints     | t95 ≤ 5s                    | Tier A (full)       |
| PROOF_2^24          | Large ZK proof, 2^24 constraints       | t95 ≤ 30s                   | Tier A (full)       |
| MATMUL_4096_FP32    | Matrix multiply 4096×4096, FP32        | t95 ≤ 2s                    | Tier A (full)       |
| INFER_LM_7B_512TOK  | 7B-parameter LM inference, 512 tokens  | t95 ≤ 10s                   | Tier B (discounted) |
| INFER_LM_70B_256TOK | 70B-parameter LM inference, 256 tokens | t95 ≤ 30s                   | Tier B (discounted) |
| PROVENANCE_MEDIA    | Media provenance chain verification    | t95 ≤ 3s                    | Tier A (full)       |
| SETTLE_ATOMIC       | Atomic swap settlement proof           | t95 ≤ 5s                    | Tier A (full)       |

New workloads can be added via governance proposal if they meet the canonical workload template requirements (§19.3.1).

19.9.2 Tier rules for Work Credit issuance

Not all verified work is created equal. Work Credits are issued according to tier rules that price trust and verification quality:

| Tier   | Verification Type                                              | WC Issuance Weight | Collateral Grade |
|--------|----------------------------------------------------------------|--------------------|------------------|
| Tier A | Full cryptographic proof (ZK-SNARK/STARK)                      | 1.0x               | Pristine         |
| Tier B | Probabilistic verification (audited transcripts, error bounds) | 0.6x               | Standard         |
| Tier C | Attestation-backed (TEE + sampling)                            | 0.3x               | Discounted       |

Rules:

  • Tier A only for high-assurance workloads (settlement, financial proofs).
  • Tier B acceptable for inference and compute where full ZK is impractical; must have explicit error bounds (e.g., “99% confidence, 0.1% false positive rate”).
  • Tier C for experimental or low-stakes workloads; not eligible as collateral in DeFi integrations.
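
Combining these tier weights with the hardware-profile issuance weights from §19.3.1, per-unit issuance might be computed as follows (the base amount is a placeholder, not a protocol constant):

```python
TIER_WEIGHT = {"A": 1.0, "B": 0.6, "C": 0.3}                  # §19.9.2
HARDWARE_WEIGHT = {"L0-A": 0.9, "L0-B": 1.0, "L0-C/D": 1.1}   # §19.3.1

def wc_issued(base_units: float, tier: str, hardware_profile: str) -> float:
    """Work Credits minted for one verified unit of a canonical workload."""
    return base_units * TIER_WEIGHT[tier] * HARDWARE_WEIGHT[hardware_profile]

# Example: a Tier B inference proof from an L0-C/D prover
print(wc_issued(100.0, "B", "L0-C/D"))  # 66.0
```
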
19.9.3 Fee + burn + slashing logic

The economic loop that connects work to value:

┌─────────────────────────────────────────────────────────────────┐
│                    LAYER 4 ECONOMIC LOOP                        │
├─────────────────────────────────────────────────────────────────┤
│  DEMAND                                                         │
│    │                                                            │
│    ▼                                                            │
│  User requests workload W ──► Pays fee F(W) in WC-Base          │
│    │                                                            │
│    ▼                                                            │
│  Fee split:                                                     │
│    ├── 70% to prover (reward)                                   │
│    ├── 20% burned (supply reduction)                            │
│    └── 10% to protocol treasury (security budget)               │
│    │                                                            │
│    ▼                                                            │
│  Prover stakes collateral C(W) ──► If invalid proof: slash 100% │
│    │                                                            │
│    ▼                                                            │
│  Valid proof accepted ──► WC-Base minted to prover (issuance)   │
│    │                                                            │
│    ▼                                                            │
│  Net effect: demand burns + mints; high demand → net burn       │
└─────────────────────────────────────────────────────────────────┘

Fee formula (reference):

F(W) = BaseFee(W) × (1 + CongestionMultiplier) × SLATierMultiplier

Where:

  • BaseFee(W) is set per workload class, adjusted quarterly based on VerifyPrice trends.
  • CongestionMultiplier rises with mempool depth (EIP-1559 style).
  • SLATierMultiplier: Bronze 1.0x, Silver 1.5x, Gold 2.5x.

Issuance cap:

  • Total new WC issuance per epoch is capped (e.g., halving schedule or capacity-linked).
  • If demand exceeds cap, issuance is pro-rated; excess fees are fully burned.
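
A worked sketch of the fee formula and the 70/20/10 split (the base fee value is illustrative):

```python
BASE_FEE = {"MATMUL_4096_FP32": 0.05}   # per workload class, adjusted quarterly
SLA_MULT = {"Bronze": 1.0, "Silver": 1.5, "Gold": 2.5}

def fee(workload_id: str, congestion: float, sla_tier: str) -> float:
    """F(W) = BaseFee(W) × (1 + CongestionMultiplier) × SLATierMultiplier."""
    return BASE_FEE[workload_id] * (1 + congestion) * SLA_MULT[sla_tier]

def split(total_fee: float) -> dict[str, float]:
    """70% prover reward, 20% burn, 10% protocol treasury."""
    return {"prover": 0.70 * total_fee,
            "burn": 0.20 * total_fee,
            "treasury": 0.10 * total_fee}

# Example: Gold-tier MATMUL at a 0.4 congestion multiplier
f = fee("MATMUL_4096_FP32", congestion=0.4, sla_tier="Gold")
print(round(f, 4))  # 0.175
print({k: round(v, 4) for k, v in split(f).items()})
# {'prover': 0.1225, 'burn': 0.035, 'treasury': 0.0175}
```
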
19.9.4 Neutral routing rule

To prevent “house-only flows” (proof factories routing work only to themselves):

  • House-share cap: No single entity may receive >20% of routed work in any epoch.
  • Enforcement: Neutral routers must implement round-robin or weighted-random selection across qualified provers.
  • Telemetry: House-share per router published hourly; breaches are Sev-1.
  • Slashing: Routers violating house-share cap lose routing eligibility for 30 days.
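
A minimal sketch of a cap-respecting router; it uses uniform random choice among under-cap provers (one allowed form of the weighted-random requirement), with names and epoch counts purely illustrative:

```python
import random

HOUSE_SHARE_CAP = 0.20   # no entity may receive >20% of routed work per epoch

def route(qualified: list[str], routed_counts: dict[str, int],
          rng: random.Random) -> str:
    """Pick the next prover, skipping anyone at or above the house-share cap."""
    total = sum(routed_counts.values()) or 1
    eligible = [p for p in qualified
                if routed_counts.get(p, 0) / total < HOUSE_SHARE_CAP]
    # Degenerate case (tiny epochs where everyone is capped): fall back to uniform.
    return rng.choice(eligible or qualified)

# Example epoch: "alpha" holds 30% of routed work and is skipped until others catch up.
counts = {"alpha": 30, "beta": 19, "gamma": 17, "delta": 18, "epsilon": 16}
pick = route(["alpha", "beta", "gamma", "delta", "epsilon"], counts, random.Random(7))
assert pick != "alpha"
```
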
19.9.5 VerifyPrice harness requirement

Before any workload is added to the canonical set:

  1. Reference verifier implementations (≥2) must exist.
  2. VerifyPrice baseline must be established via measurement harness (§19.6.1).
  3. SLO targets must be published and achievable on Laptop-Class hardware.

No workload without a VerifyPrice baseline can mint WC-Base.


Why this matters:

This section converts Part IV from “architecture” into “auditable mechanism.” A skeptic can now ask:

  • “What workloads count?” → Starter set + template for additions.
  • “How are credits issued?” → Tier rules with explicit weights.
  • “Where does value accrue?” → Fee split + burn + collateral requirements.
  • “How is capture prevented?” → Neutral routing + house-share cap.

If any of these mechanisms fail (fees don’t burn, tiers collapse, routers capture), the SoV claim is falsified.

