§23. Extended Telemetry: VerifyPrice, VerifyReach, VerifySettle
“No dashboards, no trust” is the mantra. But dashboards themselves can be gamed, cherry-picked, or memory-holed. If Layer 6 claims “measurement is the constitution,” then the measurement regime itself must be bulletproof. This section specifies how.
23.0.1 Reference verifier classes
All Verify* metrics are measured on standardized reference hardware. Without this, “VerifyPrice ≤ 5s” is meaningless—5s on what machine?
| Class | Hardware Spec | Use Case |
|---|---|---|
| Laptop-Class | 4-core CPU (≥2.5 GHz), 16GB RAM, SSD, no GPU | Default for “anyone can verify” claim |
| Mobile-Class | ARM SoC (A15/Snapdragon 8-class), 8GB RAM | Edge/mobile verification |
| Server-Class | 32-core CPU, 128GB RAM, NVMe, optional GPU | High-throughput (exchanges, proof factories) |
Rule: All constitutional SLOs (e.g., “VerifyPrice p95 ≤ 5s”) are defined against Laptop-Class unless otherwise specified. This ensures the “anyone can verify” promise is falsifiable.
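To make the class definitions machine-checkable, here is a minimal sketch (Python) of a verifier-class registry keyed to the table above. The field names, the Mobile-Class core/clock figures, and the slo_verifier_class helper are illustrative assumptions, not part of the specification.

```python
# Minimal sketch of a reference-verifier-class registry, following the table above.
# Field names and the SLO helper are illustrative; Mobile-Class cores/clock are assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class VerifierClass:
    name: str
    cpu_cores: int
    min_ghz: float
    ram_gb: int
    gpu: bool

REFERENCE_CLASSES = {
    "laptop": VerifierClass("Laptop-Class", cpu_cores=4, min_ghz=2.5, ram_gb=16, gpu=False),
    "mobile": VerifierClass("Mobile-Class", cpu_cores=8, min_ghz=2.0, ram_gb=8, gpu=False),
    "server": VerifierClass("Server-Class", cpu_cores=32, min_ghz=2.5, ram_gb=128, gpu=True),
}

def slo_verifier_class(slo: dict) -> VerifierClass:
    """Constitutional SLOs default to Laptop-Class unless they name another class."""
    return REFERENCE_CLASSES[slo.get("verifier_class", "laptop")]

if __name__ == "__main__":
    slo = {"metric": "VerifyPrice", "p95_seconds": 5.0}   # no class specified
    print(slo_verifier_class(slo).name)                   # -> Laptop-Class
```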
23.0.2 Reference verifier implementations
| Requirement | Specification |
|---|---|
| Independent implementations | ≥2 independent verifier codebases per canonical workload tier |
| Open source | All reference verifiers must be open-source with reproducible builds |
| Versioned and hashed | Binary hashes are part of the canonical workload registry |
| Deterministic | Same proof + same verifier version → same result, always |
Why this matters: A single verifier implementation can have bugs that inflate or deflate measurements. Two independent implementations provide a cross-check.
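A minimal sketch of the cross-check implied by the two-implementation rule: the same proof is handed to both verifiers, and any disagreement is escalated rather than averaged away. The callables below stand in for the hashed reference binaries; all names are illustrative.

```python
# Minimal sketch of the two-implementation cross-check. The verifier callables
# are stand-ins for the hashed, reproducibly built reference binaries.
from typing import Callable

Verifier = Callable[[bytes], bool]

def cross_check(proof: bytes, verifier_a: Verifier, verifier_b: Verifier) -> bool:
    """Return the agreed verdict; raise if the implementations disagree."""
    a, b = verifier_a(proof), verifier_b(proof)
    if a != b:
        raise RuntimeError("verifier divergence: cross-check failed, file an incident")
    return a

if __name__ == "__main__":
    # Toy stand-ins: both "implementations" accept proofs with a magic prefix.
    impl_a = lambda p: p.startswith(b"\x01proof")
    impl_b = lambda p: p[:6] == b"\x01proof"
    print(cross_check(b"\x01proof...", impl_a, impl_b))   # True
    print(cross_check(b"garbage", impl_a, impl_b))        # False
```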
23.0.3 Cost vector definition
VerifyPrice is not a single number. It is a cost vector for each workload W and verifier class V:
| Component | Definition |
|---|---|
| Time | Median (p50) and 95th-percentile (p95) verification time, in seconds |
| Memory | Peak memory usage (MB) |
| Bandwidth | Bytes consumed during verification |
| Energy | Energy estimate in Joules, via hardware counters or a watt-meter |
| Failure rate | Rate of timeouts, crashes, and invalid rejections |
USD estimate (optional): derived from a reference energy/compute price (e.g., 0.001 USD per verifier-second), published quarterly.
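As a concrete shape for the vector, here is a minimal sketch that aggregates per-verification receipts into the components above. The receipt field names and aggregation choices (median bandwidth/energy, max peak memory) are assumptions for illustration, not a registry format.

```python
# Minimal sketch of the VerifyPrice cost vector for one (workload, verifier class)
# pair, aggregated from per-verification receipts. Field names are illustrative.
from dataclasses import dataclass
from statistics import median, quantiles

@dataclass
class CostVector:
    time_p50_s: float
    time_p95_s: float
    peak_memory_mb: float
    bandwidth_bytes: int
    energy_joules: float
    failure_rate: float

def aggregate(receipts: list[dict]) -> CostVector:
    """Aggregate per-verification receipts; needs at least two successful receipts."""
    ok = [r for r in receipts if r["status"] == "ok"]
    times = [r["verify_seconds"] for r in ok]
    return CostVector(
        time_p50_s=median(times),
        time_p95_s=quantiles(times, n=20)[18],              # 95th-percentile cut point
        peak_memory_mb=max(r["peak_mb"] for r in ok),
        bandwidth_bytes=int(median(r["bytes"] for r in ok)),
        energy_joules=median(r["joules"] for r in ok),
        failure_rate=1 - len(ok) / len(receipts),
    )
```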
23.0.4 Sampling and anti-cherry-pick
The most common attack on telemetry is selective reporting: only show receipts from good regions, fast hardware, or favorable conditions. The sampling regime prevents this.
| Mechanism | Specification |
|---|---|
| Public randomness beacon | Receipt selection seeded by on-chain randomness (e.g., block hash, VRF output) |
| Stratified sampling | Samples drawn proportionally from: workload class, hardware profile, geographic region, time-of-day |
| Adversarial corpus | 10% of test proofs are malformed or worst-case (max witness size, pathological inputs) |
| Continuous measurement | Rolling 24-hour windows; no periodic snapshots that can be gamed |
| Multi-operator | ≥3 independent measurement operators; divergence >5% triggers investigation |
Rule: Any dashboard point published without a verifiable sampling seed and stratification breakdown is not part of the official Verify* record.
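A minimal sketch of beacon-seeded, stratified sampling: anyone holding the same beacon value and receipt corpus reproduces exactly the same sample, which is what makes cherry-picking detectable. The stratification keys and receipt fields are illustrative assumptions.

```python
# Minimal sketch of beacon-seeded, stratified receipt sampling. The beacon value
# would come from on-chain randomness (block hash / VRF output); here it is a parameter.
import hashlib
import random
from collections import defaultdict

def sample_receipts(receipts: list[dict], beacon: bytes, per_stratum: int) -> list[dict]:
    """Deterministically sample the same receipts for anyone holding the beacon."""
    rng = random.Random(hashlib.sha256(beacon).digest())
    strata = defaultdict(list)
    for r in receipts:
        key = (r["workload"], r["hardware_profile"], r["region"])
        strata[key].append(r)
    sample = []
    for key in sorted(strata):                                    # sorted for determinism
        group = sorted(strata[key], key=lambda r: r["receipt_id"])
        sample += rng.sample(group, min(per_stratum, len(group)))
    return sample
```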
23.0.5 Data availability and anti-memory-hole
Dashboards are views. The source of truth is the receipt corpus. Receipts must be:
| Requirement | Specification |
|---|---|
| Content-addressed | Receipt sets identified by Merkle root |
| Anchored | Merkle roots periodically committed to a neutral ledger (e.g., Bitcoin, Ethereum) |
| Archived | Raw receipt data retained for ≥2 years by ≥3 independent archivists |
| Queryable | Any observer can request receipts corresponding to a dashboard point |
Why this matters: If a government or CDN can delete “bad months,” the telemetry regime is captured. Content-addressing + neutral anchoring makes deletion detectable.
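A minimal sketch of content-addressing a receipt set with a binary Merkle tree, so a dashboard point can name the exact corpus it was computed from. The tree construction here (sorted leaves, duplicated last node on odd levels) is illustrative rather than normative.

```python
# Minimal sketch of content-addressing a receipt set: the resulting Merkle root is
# the identifier that gets anchored on a neutral ledger.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(receipts: list[bytes]) -> bytes:
    """Root over canonically sorted serialized receipts."""
    level = [_h(r) for r in sorted(receipts)]
    if not level:
        return _h(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last leaf on odd levels
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

if __name__ == "__main__":
    root = merkle_root([b'{"receipt_id": 1}', b'{"receipt_id": 2}'])
    print(root.hex())                        # the hex root that gets anchored
```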
23.0.6 Reproducibility contract
A valid dashboard datapoint is defined as the tuple (dataset root, harness version, verifier version, result).
Validity check: anyone can:
- Fetch the receipts identified by the dataset root
- Run the tagged harness version with the pinned verifier version
- Check that the recomputed result matches the published result
If the results don’t match, the datapoint is invalid and the discrepancy is a Sev-1 incident.
Published artifacts:
- Harness code (open source, tagged)
- Verifier binaries (reproducible builds, hashed)
- Dataset roots (timestamped, anchored)
- Aggregation scripts (deterministic)
This transforms “no dashboards, no trust” from philosophy into a reproducible measurement contract.
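A minimal sketch of that validity check as code; fetch_receipts and run_harness are placeholders for the published artifacts (an archivist API and the tagged harness), not real interfaces.

```python
# Minimal sketch of replaying a datapoint from its dataset root and comparing
# against the published result. The fetch/run callables are placeholders.
def replay_datapoint(datapoint: dict, fetch_receipts, run_harness) -> bool:
    receipts = fetch_receipts(datapoint["dataset_root"])
    result = run_harness(
        receipts,
        harness_version=datapoint["harness_version"],
        verifier_version=datapoint["verifier_version"],
    )
    if result != datapoint["result"]:
        # A mismatch invalidates the datapoint and is a Sev-1 incident.
        raise RuntimeError(f"reproducibility failure for root {datapoint['dataset_root']}")
    return True
```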
VerifyPrice was introduced at Layer 4; VerifyReach at Layer 1; VerifySettle at Layer 5. Layer 6 treats them as a coherent observability regime rather than three unrelated graphs.
How to measure: receipts, not vibes
The observability story only works if it is itself verifiable. That is why every claim, proof, settlement, and SLA outcome is turned into a receipt with enough structure that anyone can replay the metrics on a reference verifier.
In practice, this means:
- Every proof receipt carries not only the proof artifact but also start/end timestamps, resource usage, and a hardware profile tag.
- Settlement receipts encode corridor, timing, refund status, and failure codes.
- Routers emit anonymized samples of order books, matched and unmatched bids, and house vs. third‑party flags.
- Hardware vendors publish lot‑attestation artifacts that receipts can reference as part of their profile.
A public “VerifyPrice Observatory” then ingests these receipts and emits the aggregate vectors we refer to in the text. Anyone can pull the raw receipts corresponding to a dashboard point, feed them into the same open‑source verifier suite, and see whether they obtain the same numbers. If they do not, the mismatch is itself an incident.
This architecture enforces a simple discipline: if it matters, there is a receipt for it. If a claim about performance, neutrality, or safety cannot be tied back to receipts that anyone can re‑check, it is out of scope for the monetary thesis.
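For concreteness, a minimal sketch of what such a proof receipt might look like, following the fields listed above; the exact schema and serialization are illustrative assumptions rather than a normative format.

```python
# Minimal sketch of a proof receipt carrying the fields described above.
from dataclasses import dataclass, asdict
import json

@dataclass
class ProofReceipt:
    receipt_id: str
    workload: str               # canonical workload identifier
    proof_hash: str             # content hash of the proof artifact
    start_ts: float             # UNIX timestamps for start/end of verification
    end_ts: float
    peak_mb: float              # resource usage during verification
    bytes_transferred: int
    hardware_profile: str       # e.g. "laptop", "open_tee_v2"
    status: str                 # "ok" | "fail" | "timeout"

    def serialize(self) -> bytes:
        """Canonical encoding used for content-addressing and Merkle anchoring."""
        return json.dumps(asdict(self), sort_keys=True).encode()
```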
VerifyPrice, VerifyReach, and VerifySettle are the three primary dials for answering:
- Is verification still cheap? (VerifyPrice)
- Can users still reach the network under pressure? (VerifyReach)
- Can users still move value privately and safely? (VerifySettle)
We can think of this observability regime as watching the stack across five planes:
- Proof & compute plane – Are receipts reliable, and does verification asymmetry actually hold under load?
- Settlement & privacy plane – Do non-custodial payouts actually clear, with privacy and refund-safety matching the promise?
- Neutrality & admission plane – Can new provers, miners, routers, and mirrors join on equal footing, or has the system ossified into a club?
- Economic coverage plane – Are real fee flows, not issuance, paying a growing share of the security and operations budget?
- Layer-0 hardware plane – Do “verifiable machines” remain falsifiable in practice, or did we slide quietly back into “trust the vendor”?
Each plane maps directly to the SoV requirements from Part II: credible scarcity, cheap public verification, censorship-resistance, neutrality, native demand, lawful privacy, and duration-neutrality. If any plane drifts too far from its targets, the monetary thesis weakens, no matter how elegant VerifyPrice looks in isolation.
23.1 Definitions and target SLOs
VerifyPrice(W) for a workload W is defined as:
- p50/p95 verification time on reference hardware profiles.
- p50/p95 energy and effective fiat cost of verification.
- Failure and mismatch rates (how often proofs fail to verify).
- Diversity of verifiers (how many distinct profiles and jurisdictions are actually verifying).
Target SLOs might look like:
- “For MatMul workloads up to dimension N, p95 verify time ≤ 5 seconds on commodity hardware and ≤ 1 second on high‑end profiles.”
- “Verification can be performed by at least K distinct hardware profiles across at least J jurisdictions.”
VerifyReach measures:
- The fraction of vantage points (ISPs, ASNs, countries) from which key services (full nodes, corridors, proof factories, routers) are reachable.
- Degradation patterns: which networks experience blocking, throttling, or high failure rates.
Target SLOs:
- “Core infrastructure reachable from ≥ X% of sampled ASNs.”
- “No country‑level view sees more than Y% persistent reachability degradation without triggering incident handling.”
VerifySettle measures:
- Settlement success rates (by corridor, asset pair, size bucket, and time window).
- Refund outcomes and times.
- Anonymity‑set indicators for shielded pools and privacy flows.
- LP concentration and corridor centralization.
Target SLOs:
- “Corridors must achieve ≥95% p90 success rates with 100% enforceable refunds.”
- “Shielded pools maintain anonymity sets above threshold A with churn above C per time window.”
- “No single LP or provider controls >Z% of corridor volume.”
These SLOs are not exhaustive, but they make what the protocol promises explicit.
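A minimal sketch of encoding these example SLOs as data and checking measured values against them; the numeric placeholders for X% and Z% are illustrative, and the input field names are assumptions.

```python
# Minimal sketch of SLO targets as data plus a generic breach check. The numeric
# values mirror the examples above; X% and Z% placeholders are illustrative.
import operator

SLO_TARGETS = {
    "verify_price":  {"p95_time_s_commodity": ("<=", 5.0), "p95_time_s_high_end": ("<=", 1.0)},
    "verify_reach":  {"asn_reachable_fraction": (">=", 0.90)},     # the X% above is illustrative
    "verify_settle": {"p90_success": (">=", 0.95), "refund_enforceable": (">=", 1.0),
                      "single_lp_share": ("<=", 0.25)},            # the Z% above is illustrative
}

def check_slo(metric: str, measured: dict) -> list[str]:
    """Return breached targets for one metric; an empty list means the SLO is green."""
    ops = {"<=": operator.le, ">=": operator.ge}
    breaches = []
    for key, (op, target) in SLO_TARGETS[metric].items():
        if not ops[op](measured[key], target):
            breaches.append(f"{metric}.{key}: measured {measured[key]}, target {op} {target}")
    return breaches

if __name__ == "__main__":
    print(check_slo("verify_settle",
                    {"p90_success": 0.93, "refund_enforceable": 1.0, "single_lp_share": 0.31}))
```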
In practice, these metrics surface as four public “boards” rather than a grab-bag of charts:
- Proof & Compute Board. Summarizes VerifyPrice per workload, SLA attainment for Bronze/Silver/Gold tiers, queue depths, and failure modes. Red flags include p95 verification times drifting above target bands (for example, >10 s for canonical proofs), verification overhead r(W) creeping above its bound (e.g., 0.3), or failure rates staying above a small threshold (≈0.5%) for extended periods under honest load.
- Settlement & Privacy Board. Reports swap success and refund safety by corridor, p50/p95 time-to-finality, anonymity-set health in shielded pools, and route diversity. Invariants are simple: success ≥ a published target (e.g., 95%) and refunds 100% safe. Any breach of refund safety automatically removes the corridor from admissible routes until a fix ships and is accompanied by a candid post-mortem.
- Neutrality & Admission Board. Tracks time-to-first-proof (or first filled job) for new entrants, top-N share and Nakamoto coefficients for provers/miners/routers/mirrors, geo/ASN distribution, and router “house share” versus third-party fills. Fairness plots compare fill rates for house vs. third-party bids at equal price/SLA. A network that claims permissionlessness but never actually onboards new suppliers shows its hand here.
- Economic Coverage Board. Charts the share of the security budget covered by fee+burn rather than issuance, physical VerifyPrice (time/energy/memory on reference hardware) for canonical workloads against constitutional SLOs, VerifyAffordability (verification cost as % of median transaction fee), and demand curves for proof/compute/settlement indexed to policy and macro events (e.g., provenance mandates, censorship episodes, real-yield shocks). Token-denominated VerifyPrice is reported as a market signal but is not a constitutional target (see §19.7.1–19.7.3). This is the board allocators and treasuries stare at when deciding whether the stack deserves a store-of-value premium.
These boards are not marketing pages; they are part of the protocol’s public interface, as important as RPC endpoints or client binaries.
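A minimal sketch of the Proof & Compute Board red-flag checks just described (p95 drift, the r(W) bound, persistent failure rates); the input dictionary shape and the breach-duration field are assumptions.

```python
# Minimal sketch of the Proof & Compute Board red flags named above.
# Thresholds follow the text; the board snapshot shape is illustrative.
def proof_compute_red_flags(board: dict) -> list[str]:
    flags = []
    if board["p95_verify_s"] > 10.0:
        flags.append("p95 verification time above 10 s for canonical proofs")
    if board["r_w"] > 0.3:
        flags.append("verification overhead r(W) above its 0.3 bound")
    if board["failure_rate"] > 0.005 and board["hours_in_breach"] > 24:
        flags.append("failure rate above 0.5% for an extended period under honest load")
    return flags

if __name__ == "__main__":
    snapshot = {"p95_verify_s": 12.4, "r_w": 0.21, "failure_rate": 0.002, "hours_in_breach": 0}
    print(proof_compute_red_flags(snapshot))
```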
23.2 Detecting drift: centralization, censorship, degraded verification economics
Telemetry is not there to decorate dashboards; it is there to detect drift.
Examples:
- Verification cost creep.
  - VerifyPrice shows p95 verify time for workload W creeping up 3–5x over six months.
  - Possible causes: circuit bloat, client inefficiencies, proof‑system misconfiguration.
  - Response: Layer‑6 triggers review; R&D lab proposes fixes; on‑chain governance ratifies circuit or parameter changes.
- Verifier concentration.
  - 80–90% of verifications are now performed by one hardware profile in one jurisdiction.
  - Response: highlight in dashboards; adjust rewards to subsidize diverse profiles; consider de‑weighting over‑concentrated profiles in Work Credit issuance.
- Reachability degradation.
  - VerifyReach shows a major country’s mobile networks blocking common protocol ports.
  - Response: shift default transports (obfuscation, domain fronting), promote alt routes (satellite, mesh), update client defaults.
- Corridor fragility.
  - VerifySettle shows BTC↔ZEC corridor success dropping and refund events spiking after new regulations.
  - Response: re‑weight corridors; onboard new LPs; adapt routing and provide warnings to users and treasuries.
Drift is inevitable. The point of Layer 6 is to make it visible early and actionable.
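A minimal sketch of the cost-creep check: compare the current p95 against a trailing baseline and flag large ratios. The 3x trigger mirrors the 3–5x example above, and the monthly windowing is an assumption.

```python
# Minimal sketch of a drift check for verification-cost creep over a trailing window.
from statistics import median

def cost_creep(p95_history: list[float], trigger_ratio: float = 3.0) -> bool:
    """p95_history: monthly p95 verify times, oldest first, e.g. ~6 trailing months plus current."""
    if len(p95_history) < 2:
        return False
    baseline = median(p95_history[:-1])   # trailing months
    current = p95_history[-1]
    return current > trigger_ratio * baseline

if __name__ == "__main__":
    print(cost_creep([1.1, 1.0, 1.2, 1.1, 1.3, 1.2, 4.5]))   # True -> trigger Layer-6 review
```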
Stress harnesses: rehearsing repression
The repression stress tests introduced earlier (YCC shocks with −300 to −500 bps real yields, on‑/off‑ramp squeezes, peg breaks) become more than thought experiments once we have observability. We can turn them into a stress harness that runs periodically, simulating or inducing worst‑case conditions and publishing the before/after deltas across all planes:
- VerifyPrice drift.
- Swap success and refund safety.
- Anonymity‑set stability.
- Fee‑coverage ratios.
- Entry latency.
- Router house share.
This is not merely academic. A store‑of‑value thesis that explicitly aims to survive repression must be willing to demonstrate that survival under controlled stress. Running the harness and publishing the receipts is how we graduate from “it should work” to “here is how it behaved the last N times we tried to break it.”
(The four public boards—Proof & Compute, Settlement & Privacy, Neutrality & Admission, and Economic Coverage—are defined above in §23.1.)
23.3 Making neutrality and repression‑resilience falsifiable
Neutrality and repression‑resilience are usually sold as moral properties. In this thesis, they are falsifiable hypotheses:
- Neutrality hypothesis: “No actor or group can reliably censor, privilege, or front‑run specific flows across Layers 1–5 without being detected and without other actors being able to route around them.”
- Repression‑resilience hypothesis: “Under YCC, capital controls, DPI, and blacklists, the triad’s core capacities (Privacy, Proofs, Compute) remain usable enough that Work Credits and derived instruments still behave as stores of value.”
Telemetry makes these hypotheses testable:
- If VerifyReach shows certain flows consistently blocked and no alternative routes emerge, neutrality is falsified.
- If VerifySettle shows privacy corridors collapsing under regulation, repression‑resilience is falsified.
- If VerifyPrice shows verification cost pushed beyond reach of ordinary verifiers, “anyone can verify” is falsified.
Layer 6 forces the system to face those falsifications rather than smoothing them over.
- A system that cannot tolerate bad news cannot be money; it can only be marketing.
- A system that keeps publishing bad news until it fixes the underlying issue has a shot at being money.
Observability beyond VerifyPrice is not a cosmetic UX layer. It is the mechanism by which the triad remains falsifiable, and therefore the mechanism by which it can credibly claim to be money. Without receipts and public dashboards, even the cleanest cryptography and the most elegant PoUW construction collapse back into the very regime this thesis is trying to leave behind: soft guarantees, platform promises, and hope. With them, “don’t trust, look at the receipts” becomes a literal design principle rather than a slogan.
23.4 Policy-Attack Stress Harness (turning repression into tests)
A monetary stack designed for an era of repression should not merely hope to survive; it should publish the exact conditions under which it considers itself to be passing or failing.
Earlier we sketched repression stress-tests (YCC shocks, on-/off-ramp squeezes, provenance mandates, hardware backdoor pressure). Here we tie them directly to Verify* boards and SLOs, so they can be run as repeatable harnesses rather than as thought experiments.
Think of this as a small table of “attack → test → pass criteria”:
- Yield-curve control & negative real yields
  - Attack: Sovereigns hold nominal rates below inflation for years, imposing −300 to −500 bps real yields and incentivizing repression over explicit default. Duration instruments bleed quietly.
  - Test: Over a 24–36-month window of such conditions:
    - On the Economic Coverage Board, the share of the security budget covered by fee+burn (proof/compute/settlement fees) is rising or stable, not collapsing back to pure issuance.
    - On the Proof & Compute Board, VerifyPrice p95 for canonical workloads stays within published thresholds (for example, p95 verify time < 5 s on reference hardware; r(W) ≤ 0.3).
  - Pass: Tokens behave like claims on useful capacity (proofs, verified FLOPs, privacy rails), not like synthetic bonds whose real yield can be driven negative by decree.
- On-/off-ramp throttling
  - Attack: Custodial exchanges and banks de-list privacy assets or triad instruments; KYC gates tighten; fiat ramps become sporadic or expensive.
  - Test: On the Settlement & Privacy Board:
    - Non-custodial BTC↔ZEC/XMR corridors still achieve ≥ 95% success and 100% refund-safety under a mix of sizes and regions.
    - Anonymity-set size and churn in shielded pools remain above thresholds (no collapse to a handful of active notes).
    - VerifyReach shows that users in constrained jurisdictions can still reach at least one corridor path.
  - Pass: Users can still move capital and pay for proofs/compute via non-custodial rails, even if custodial ramps are hostile. The rail behaves like infrastructure, not like a permissioned product.
- Provenance & AI policy mandates
  - Attack: Regulators require provenance for certain classes of media or AI outputs (e.g., critical election footage, safety-critical model outputs) and push platforms to enforce compliance.
  - Test: On the Proof & Compute Board and provenance-specific panels:
    - Canonical Provenance(C, chain) workloads have healthy VerifyPrice (cheap to verify) and low failure rates.
    - A measurable fraction of high-risk content flows carry PIDL receipts that tie back to known hardware profiles and model commits.
    - Courts, regulators, or platforms can run verify(receipt) with commodity tooling, independent of any single vendor.
  - Pass: The triad is the cheapest way to comply with provenance mandates; proofs are commoditized capacity, not bespoke SaaS offerings.
- Hardware backdoor mandates
  - Attack: Governments push “secure enclaves” or HSMs that ship with vendor- or state-controlled attestation keys and seek to make them mandatory for key storage, signing, or proving.
  - Test: On the Hardware/Layer-0 panels and Neutrality Board:
    - A non-trivial share of receipts (by volume and value) are tied to open or sampled hardware profiles (open_tee_v2, pure_zk_only, open_camera profiles), not to a single opaque TEE family.
    - For high-sensitivity workloads, policies can and do specify allowed_hardware_profiles=["pure_zk_only", "open_tee_v2"], and PaL/PFS/VCO can satisfy those policies at reasonable VerifyPrice.
    - Deprecation of a compromised profile is visible (incident reports, profile status flipping to “deprecated”) and workloads migrate to healthier profiles within bounded time.
  - Pass: Hardware mandates become one option among many, not a single kill switch. Applications can route work onto verifiable machines or pure-ZK paths without rewriting from scratch.
Each of these scenarios has a runbook:
- Simulate or observe the attack (e.g., using region filters, policy toggles, or incident logs).
- Record before/after on the four boards (Proof & Compute, Settlement & Privacy, Neutrality & Admission, Economic Coverage).
- Publish receipts and incident reports so others can audit the test and its outcome.
When the stack claims “we can survive policy attacks,” the proof is not a blog post; it is a corpus of stress-harness runs with metrics and receipts. If those runs stay green through YCC, corridor squeezes, provenance mandates, and hardware pressure, the SoV thesis has teeth. If they go red and stay red, the thesis has failed, regardless of how elegant the cryptography looked on day one.
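A minimal sketch of what one stress-harness run record might look like: a scenario name, board snapshots before and after the shock, and explicit pass criteria evaluated over the “after” snapshot. The field names, scenario label, and criteria callables are illustrative assumptions.

```python
# Minimal sketch of a stress-harness run record: scenario, before/after board
# snapshots, and pass criteria evaluated over the post-shock snapshot.
from dataclasses import dataclass, field
from typing import Callable
import time

@dataclass
class HarnessRun:
    scenario: str                                # e.g. "onramp_throttling" (illustrative label)
    before: dict                                 # board snapshot prior to the shock
    after: dict                                  # board snapshot under / after the shock
    criteria: dict[str, Callable[[dict], bool]]  # name -> predicate over the "after" snapshot
    started_at: float = field(default_factory=time.time)

    def verdict(self) -> dict[str, bool]:
        return {name: check(self.after) for name, check in self.criteria.items()}

if __name__ == "__main__":
    run = HarnessRun(
        scenario="onramp_throttling",
        before={"corridor_success": 0.97, "refund_safety": 1.0},
        after={"corridor_success": 0.96, "refund_safety": 1.0},
        criteria={
            "success >= 95%": lambda a: a["corridor_success"] >= 0.95,
            "refunds 100% safe": lambda a: a["refund_safety"] == 1.0,
        },
    )
    print(run.verdict())   # all True -> the run and its receipts get published as green
```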