§14. Layer 0 – Verifiable Machines & Energy
Jason St George, "§14. Layer 0 – Verifiable Machines & Energy," in Next‑Gen Store of Value: Privacy, Proofs, Compute, version v1.0. /v/1.0/read/part-iii/14-layer-0/
Layer 0 is where cryptography stops being metaphor and touches matter.
PoW had an implicit Layer 0: silicon, power, and warehouses hashing in the dark. The trust assumption was: “ASICs will do what the SHA‑256 spec says.” In practice that meant: “we trust the vendor, the fab, the firmware, and the power company, and we hope no one has a better ASIC they haven’t told us about.”
That was barely acceptable for useless work. It is not acceptable when the puzzle mints receipts, not heat; when proofs and verified compute become monetary primitives.
This Section makes Layer 0 explicit:
- It explains why “trust the vendor” is the cliff edge for the whole triad.
- It defines design goals and non‑goals for verifiable machines.
- It sketches concrete mechanisms: open designs, hardware profiles, sampling, attestations, and power telemetry.
- It shows what Layer 0 exports: the guarantees and artifacts higher layers consume when they mint Work Credits or publish VerifyPrice.
14.1 Why Layer 0 is a monetary question
Gold worked as money because geology is hard to fake at scale. Bitcoin worked because hashing cost (chip design, fab capacity, power, time) was hard to fake at scale. In both cases, the monetary story rested on silent assumptions:
- Rocks behave;
- Fabs behave;
- Physics behaves.
The triad inherits those assumptions and tightens them:
- Privacy is only as real as the devices that hold keys and speak on the wire.
- Proofs are only as real as the machines that generate entropy, execute circuits, and sign receipts.
- Compute is only as real as the GPUs/ASICs that claim to have run workloads.
If those machines are opaque, remotely steerable, or quietly biased, Work Credits degrade into theater:
- A compromised RNG can turn “unpredictable leader election” into a slow rug‑pull.
- A backdoored prover can emit “valid” proofs that leak secrets to its designer.
- A mandated TEE can become a kill switch for entire clusters of provers and routers.
All of this math still runs on matter.
Every proof, every encrypted wallet, every verified FLOP ultimately lives on a sliver of doped silicon that almost nobody is allowed to audit. Today’s “trusted hardware” stack is a daisy chain of NDAs: closed‑source EDA tools, proprietary IP blocks, opaque PDKs, black‑box fabs, sealed packaging, vendor‑run attestation services. Somewhere between RTL and running chip there are a dozen places where one compromised actor can introduce a trojan, weaken randomness, or exfiltrate keys, and the best cryptography in the world will happily prove whatever that chip tells it.
If the machine can lie, the proof becomes theater.
From a monetary standpoint, Layer 0 asks:
“Can we treat triad capacity as collateral if we don’t know what the machines are really doing?”
The answer is “no.” A triad backed by machines we cannot interrogate is not a reserve asset; it is an IOU on a hardware cartel plus the jurisdictions that regulate it.
Three uncomfortable facts follow:
- Cryptography inherits the honesty of the chips it runs on. A wallet whose random numbers are biased by a hardware backdoor is secure on paper and compromised in practice. A ZK prover that secretly keeps trapdoor keys turns “succinct proofs” into forged receipts. A TEE that can be silently opened by its manufacturer is just a remote admin panel in fancy clothes.
- Closed hardware centralizes trust, even when protocols look decentralized. If most proving, mining, and inference capacity depends on a handful of proprietary TEEs, GPUs, or accelerators, then effective control sits with the vendors and their home jurisdictions. Monetary rails that are neutral in protocol space can be vetoed in fab space.
- Supply‑chain opacity is a monetary risk, not just an infosec nuisance. When sovereigns are under debt stress, the temptation is not merely to censor transactions but to mandate “secure enclaves” and backdoored HSMs in the name of safety. If the only viable hardware is opaque, the repression tooling can be installed below the protocol line and sold as a security upgrade.
So the mandate of Layer 0 is:
Translate “trust the vendor” into “trust these verifiable claims about the machine and its power,” or don’t pretend it’s money.
For Privacy, this means that without open hardware the hull can always be drilled from below: side-channel-heavy designs, debug modes that never truly die, firmware paths that bypass keys you thought were cold.
For Proofs, it means “cheap verification” can be undone by a single secret: a chip that quietly recognizes and accepts bogus artifacts.
For Compute, it means that useful-work mining or verified inference can silently degrade into “whoever owns the weirdest proprietary accelerator wins.”
14.2 Design goals and non‑goals
Layer 0 has to be ambitious enough to matter and humble enough not to LARP full supply‑chain omniscience.
Goals
- Verifiability over purity. We aim for checkable claims about machines, not for metaphysical purity. Open RTL where we can; structured sampling where we cannot.
- Common knowledge of security. Different actors (miners, provers, treasuries, regulators) should be able to agree on facts about hardware profiles (“this chip family has been sampled at rate R; side‑channel leakage is bounded by X”), even if they disagree about policy.
- Energy anchoring, not energy worship. Power use should be measurable enough that “Work Credit per joule” is meaningful; the goal is not maximal burn, but verifiable linkage between energy, work, and proof.
- Degradability under attack. When assumptions fail (a TEE breaks, a fab is compromised), the system should degrade visibly: telemetry spikes, profiles are deprecated, Work Credits tied to broken profiles are hair‑cut or quarantined.
- Composable exports. Layer 0 should emit artifacts (attestations, profiles, telemetry feeds) that higher layers can consume mechanically (PaL, workload registries, VerifyPrice dashboards) without manual lore.
Non‑goals
- Perfect trustlessness. We will not “solve” global hardware and supply chains. We aim to reduce trust to sampled, explicitly documented assumptions.
- Single‑vendor dependence. No design should require that one chip vendor, cloud, or TEE be honest forever. Heterogeneity is a feature, not a bug.
- Total hardware transparency on day one. Political and commercial realities exist. Layer 0 tolerates partially closed hardware if the opacity is bounded and surrounded by sampling, attestation, and migration paths.
- Magical protection against all side‑channels. We assign budgets to attack surfaces; we don’t pretend they vanish.
14.3 Hardware as base reality for Work Credits
In Part II, Work Credits were defined as claims on standardized units of triad work (privacy settlement, proofs, verified compute) anchored to energy and VerifyPrice.
Layer 0 defines the hardware profile that each Work Credit type rests on. For a canonical workload (W), a hardware profile (H) might specify:
- Chip family and stepping.
- Microarchitectural features (e.g., presence of certain accelerators or TEEs).
- RNG source and test regimen.
- Power metering and thermal envelope.
- Known limitations (e.g., “avoids TEE X due to backdoor Y; uses open core Z instead”).
When a Work Credit of type (W,T) is minted, the receipt can say:
“This unit of work was performed on hardware profile H under conditions C, with proof P and VerifyPrice statistics V.”
Monetarily, that matters because:
- Profiling makes hardware risk priced instead of hidden. Credits from “sketchy profile H’” can trade at a discount.
- It lets different actors pick their risk tolerance: some will only hold Work Credits linked to fully open cores; others will accept mixed profiles in exchange for lower cost or higher performance.
Layer 0’s job is not to tell everyone what risk to take; it is to make the risk legible and instrumentable.
14.3.1 The Layer 0 Feasibility Ladder
A common objection to Layer 0 is: “Open silicon and sampled supply chains sound like moonshots. What can we actually do this decade given real-world fabs, opaque GPU stacks, and geopolitical constraints?”
The answer is a graded trust ladder. Layer 0 does not require perfection on day one; it requires measurable progress and falsifiable claims at each grade. Higher grades provide stronger guarantees; lower grades are acceptable for less sensitive workloads, with telemetry that detects when you’re relying on weaker grades.
| Grade | Name | What It Means | Achievable When | Trust Residual |
|---|---|---|---|---|
| L0-A | Best Available Today | Attestation + multi-party audits + reproducible benchmarking + diversity requirements. Use existing hardware (commodity CPUs, GPUs, FPGA/ASICs) with layered mitigations. | Now | Trust vendor + third-party auditors; side-channel budget measured but not minimized. |
| L0-B | Sampled & Bounded | Lot sampling + imaging + side-channel budgets + independent lab inspections. Statistical confidence that X% of units match claimed specs. | 1–3 years | Trust sampling methodology; residual risk is unsampled units and sophisticated supply-chain attacks. |
| L0-C | Partial Open | Open RTL for critical components (CPU cores, RNG, key storage); open firmware; proprietary accelerators wrapped with proofs. | 2–5 years | Trust fab + packaging; open logic is inspectable; closed accelerators bounded by interface proofs. |
| L0-D | Fully Open (where possible) | Fully open designs + open PDKs + multi-fab production. End-to-end inspectable from RTL to packaged chip. | 5–10+ years, jurisdiction-dependent | Trust physics + fab process; no single-vendor chokepoint. |
Key principles:
- Grades are explicit. Every hardware profile is tagged with its Layer 0 grade. Work Credits, VerifyPrice dashboards, and SLA tiers reference these grades so users know what trust assumptions they’re accepting.
- Telemetry detects reliance on weaker grades. If 80% of proving capacity is L0-A (vendor-attested) and only 5% is L0-C (partial open), that concentration is visible in dashboards. Users can decide whether to hold Work Credits tied to such a distribution.
- Higher grades earn lower risk premiums. Markets should price Work Credits from L0-D profiles more favorably than L0-A profiles, creating economic gravity toward openness as it becomes available.
- Migration paths are first-class. When L0-B or L0-C options become viable, there are documented procedures to migrate workloads off L0-A profiles without catastrophic downtime.
- Failure modes are bounded. If a specific grade is compromised (e.g., a TEE used in L0-A profiles is broken), the impact is contained to that grade. Higher-grade capacity continues to function; affected Work Credits are quarantined or hair-cut.
Policy Hooks: Economic Treatment by Grade
The following table specifies how each L0 grade affects Work Credit eligibility, pricing, and collateral treatment. These are reference parameters; implementations may adjust within bounds.
| Grade | WC Eligibility | Issuance Weight | Collateral Haircut | VerifyPrice Tier |
|---|---|---|---|---|
| L0-A | All workload types; capped at 60% of total issuance | 1.0x (baseline) | 0% (if diversified) to 15% (if concentrated) | Reported separately; flagged if >70% of measurements |
| L0-B | All workload types; no issuance cap | 1.0x | 0% | Standard reporting |
| L0-C | All workload types; eligible for “open-profile” premium tier | 1.1x (bonus for open hardware investment) | 0%; may qualify as “pristine collateral” | Premium tier; highlighted in dashboards |
| L0-D | All workload types; highest trust tier | 1.2x | 0%; pristine collateral | Gold tier; used as reference baseline |
Interpretation:
- Issuance weight: Higher grades earn more WC per unit of verified work, incentivizing investment in open hardware.
- Collateral haircut: When WC are used as collateral (e.g., for staking, LP positions, or DeFi), lower grades may face haircuts. L0-A collateral is discounted if a single vendor/TEE dominates.
- VerifyPrice tier: Dashboards stratify measurements by L0 grade. If most verification runs on L0-A hardware, this is flagged as concentration risk.
- Issuance cap (L0-A only): To prevent over-reliance on vendor-attested hardware, L0-A profiles are capped at 60% of new issuance. Excess work at L0-A earns priority for future slots but not extra tokens.
Quantitative thresholds (reference design):
| Metric | Threshold | Consequence if Breached |
|---|---|---|
| L0-A share of issuance | >60% | New L0-A issuance paused until ratio falls |
| Single vendor share within L0-A | >40% | Affected profiles moved to “watch list”; 5% collateral haircut |
| Sampling coverage for L0-B | <80% of units per quarter | Grade downgraded to L0-A until coverage restored |
| Open-source audit currency (L0-C/D) | >12 months since last audit | Grade downgraded one level |
These thresholds are governance parameters, adjustable via protocol upgrade with supermajority. Changes are announced 90 days in advance and visible in dashboards.
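The policy tables above can be encoded mechanically. A minimal sketch with the reference parameters as written; the function and dictionary names (`work_credits_minted`, `threshold_flags`) are illustrative, not part of any spec:

```python
# Issuance weights per Layer 0 grade, from the "Policy Hooks" table.
ISSUANCE_WEIGHT = {"L0-A": 1.0, "L0-B": 1.0, "L0-C": 1.1, "L0-D": 1.2}

def work_credits_minted(grade: str, verified_units: float) -> float:
    """WC minted for a quantity of verified work, scaled by grade weight."""
    return verified_units * ISSUANCE_WEIGHT[grade]

def threshold_flags(l0a_issuance_share: float,
                    top_vendor_share_within_l0a: float,
                    l0b_quarterly_sampling_coverage: float,
                    months_since_open_audit: int) -> list:
    """Evaluate the four quantitative thresholds from the reference table."""
    flags = []
    if l0a_issuance_share > 0.60:
        flags.append("pause new L0-A issuance until share falls below 60%")
    if top_vendor_share_within_l0a > 0.40:
        flags.append("watch list: 5% collateral haircut on affected L0-A profiles")
    if l0b_quarterly_sampling_coverage < 0.80:
        flags.append("downgrade affected L0-B profiles to L0-A")
    if months_since_open_audit > 12:
        flags.append("downgrade L0-C/D profiles one grade (stale audit)")
    return flags
```

Because the thresholds are governance parameters, a real implementation would read them from the announced parameter set rather than hard-code them as above.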
Pragmatic starting point (L0-A):
Today’s stack can achieve L0-A by combining:
- Multi-vendor diversity: No single chip family or TEE dominates more than X% of critical capacity.
- Third-party attestation audits: Independent labs verify that attestation claims match device behavior.
- Reproducible firmware and benchmarks: All prover/router firmware is built reproducibly; benchmark results are publicly verifiable.
- Side-channel measurement: Known side-channel leakage is measured and published as a “leakage budget” per profile.
- Cryptographic agility: Profiles document which primitives can be upgraded via firmware vs. require hardware swap.
This is not trustless. It is trust-bounded and measured. The residual trust is explicit: “We trust these vendors, these auditors, and this sampling methodology, and here is the evidence.”
The path forward:
- L0-A → L0-B: Fund independent lot-sampling programs. Publish inspection results. Build statistical models of detection confidence.
- L0-B → L0-C: Invest in open RISC-V cores, open RNG designs, open firmware stacks. Wrap proprietary accelerators with proof interfaces so their internal logic doesn’t need to be trusted.
- L0-C → L0-D: Support open PDK initiatives. Diversify fab sources across jurisdictions. Build ecosystem gravity so open designs become economically competitive.
Why this prevents “Layer 0 is impossible, therefore thesis fails”:
The thesis does not require L0-D today. It requires:
- Clear grading of what trust assumptions each profile carries.
- Telemetry that makes those assumptions visible.
- Economic and technical paths toward stronger grades over time.
- Failure modes that degrade visibly, not silently.
If these four conditions hold, Layer 0 becomes a progress metric rather than a moonshot prerequisite. The stack can operate today at L0-A while building toward L0-C/D, and users can decide what risk they’re willing to accept at each stage.
14.4 Concrete components of Layer 0
Layer 0 is not a single device. It is a bundle of practices and mechanisms that give higher layers a surface to stand on. Four components dominate.
14.4.1 Open designs where possible
The gold standard is open cores and toolchains:
- Open RTL or microarchitectures for CPUs/accelerators.
- Open PDKs where geopolitical and commercial conditions allow.
- Reproducible build infrastructure for firmware and low‑level runtimes.
This does not mean “only open hardware is allowed.” It means:
- Where open options exist, they are first‑class citizens in hardware profiles.
- Workload registries and PaL can prefer open profiles when compiling circuits or assigning work.
- Telemetry distinguishes “open‑profile work” from “closed‑profile work,” rather than lumping them.
On the margin, this creates economic gravity: as open hardware matures, Work Credits tied to open profiles should command a lower risk premium and deeper adoption.
14.4.2 Attested randomness and entropy
Randomness is the hidden spine of consensus and proofs. A biased RNG can:
- Leak signing keys.
- Predict leader election.
- Make PoUW “fairness” an illusion.
Layer 0 requires:
- Hardware RNG designs that are documented and testable (e.g., ring oscillators, metastability, not opaque black boxes only accessible via vendor firmware).
- Attested entropy tests: periodic statistical test suites whose results are folded into hardware profiles.
- Blended randomness: protocol‑level insistence that no single RNG source decides critical entropy, mixing hardware entropy with VRFs, commit‑reveal, and cross‑machine aggregation.
From a triad perspective, entropy health is a Layer‑0 metric that feeds Layer‑4 security and Layer‑6 telemetry.
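The blending requirement above can be sketched with a simple hash combiner. This is an illustrative construction, not the protocol's actual mixer; the commit-reveal and VRF layers described in the text are what would stop an adversary from choosing its contribution last:

```python
import hashlib
import os

def mix_entropy(sources, domain: str = "layer0/blend/v1") -> bytes:
    """Hash-combine independent entropy inputs. The output stays unpredictable
    as long as at least one input does, provided contributions are fixed
    before the others are revealed (the commit-reveal layer's job)."""
    h = hashlib.sha256(domain.encode())
    for s in sources:
        # Length-prefix each contribution so concatenation is unambiguous.
        h.update(len(s).to_bytes(8, "big"))
        h.update(s)
    return h.digest()

# Example: blend a hardware RNG read (stand-in), OS entropy, and a
# placeholder commit-reveal beacon value.
seed = mix_entropy([
    os.urandom(32),                               # hardware RNG stand-in
    os.urandom(32),                               # OS entropy
    hashlib.sha256(b"epoch-reveals").digest(),    # placeholder beacon
])
```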
14.4.3 Attestation without priesthood
Modern hardware ecosystems push TEEs and attestation as the answer to everything. Used naively, they simply move “trust the vendor” into “trust the vendor’s signing key and the bureaucracy behind it.”
Layer 0’s posture is:
- TEEs and hardware attestation are useful tools, not root of trust.
- Attestations should be wrapped and sampled, not taken as gospel.
Concretely:
- Devices can emit local attestations: “I am hardware profile H, running firmware F, with measurements M.”
- Layer 0 wraps these statements in SNARKs or STARKs where possible, so that higher layers can verify them efficiently without exposing all details.
For example:
“Here is a succinct proof that: (a) this device’s measurement matches a whitelisted profile hash set; (b) its firmware image passed reproducible build checks; (c) it is running the reference prover binary.”
We do not assume that every device is attested or honest. Open hardware without proofs is artisanal (inspectable only by those close enough to touch the metal). Proof systems without open hardware are speculative (beautiful on paper but ultimately vouching for machines that might be lying).
Instead:
- Attestation is tied to lot sampling (see below).
- Higher layers can refuse Work Credits from profiles that fail attestation audits or whose attestation keys are compromised.
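As a toy illustration of the receipt structure, here a plain hash check stands in for the SNARK wrapper; all type and function names are hypothetical:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class AttestationReceipt:
    profile_hid: str      # hardware profile identifier (HID)
    firmware_hash: str    # hash of the firmware image the device reports
    measurement: str      # device measurement bound to a verifier nonce

def make_measurement(nonce: bytes, profile_hid: str, firmware_hash: str) -> str:
    """What an honest device would report for a given challenge nonce."""
    return hashlib.sha256(
        nonce + profile_hid.encode() + firmware_hash.encode()
    ).hexdigest()

def verify_receipt(r: AttestationReceipt,
                   accepted_profiles: set,
                   reproducible_builds: set,
                   nonce: bytes) -> bool:
    """Accept only if the profile is whitelisted, the firmware matches a
    reproducible build, and the measurement is bound to our fresh nonce."""
    return (r.profile_hid in accepted_profiles
            and r.firmware_hash in reproducible_builds
            and r.measurement == make_measurement(nonce, r.profile_hid,
                                                  r.firmware_hash))
```

In the real design the verifier would check a succinct proof over the same three claims rather than recomputing the measurement itself.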
14.4.4 Lot sampling and destructive audits
Because we cannot open every chip, Layer 0 leans on lot sampling:
- For each hardware profile, a fraction of units are randomly selected for deep inspection: decapping, imaging, side‑channel probing.
- Results are published as part of the profile’s dossier: “N out of M sampled; no backdoors found; side‑channel leakage below budget X under test suite Y.”
Destructive sampling is expensive, so:
- It focuses on high‑impact profiles (e.g., chips used by major proof factories and corridor relays).
- It is financed by protocol‑level allocations or Work Credit inflation earmarked for Layer‑0 assurance.
This is how Layer 0 turns “we hope the vendor is honest” into:
“We have inspected a statistically relevant sample of this profile, and here are the findings and residual risks.”
It is not proof of purity. It is evidence, which is all you can ever get in hardware.
14.4.4.1 Sampling economics and sufficiency
Critics will ask: “How much sampling is enough? Who pays? What confidence do we actually get?”
Sample rate model (reference design):
| Profile Criticality | Definition | Minimum Sample Rate | Funding Source |
|---|---|---|---|
| Tier 1 (Critical) | Top 5 profiles by WC issuance; >10% of total capacity | ≥50 units/quarter | Protocol assurance budget (2% of fee revenue) |
| Tier 2 (Standard) | Profiles with 1–10% of capacity | ≥20 units/quarter | Protocol assurance budget |
| Tier 3 (Emerging) | New profiles in probation; <1% of capacity | ≥5 units/quarter | Profile sponsor (operator or vendor) |
Statistical sufficiency:
For a profile with N deployed units (N large), sampling n units gives a bounded detection guarantee: if a fraction k of units are compromised, at least one compromised unit appears in a random sample with probability approximately P(detect) = 1 − (1 − k)^n.
For Tier 1 profiles (n=50, k=5%), detection probability is ~92%. This is not certainty; it is bounded uncertainty, documented and priced.
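Under a standard independent-sampling model (an assumption, but one consistent with the ~92% figure), a sample of n units detects a compromise affecting a fraction k of a large population with probability 1 − (1 − k)^n. A quick sketch:

```python
import math

def detection_probability(n: int, k: float) -> float:
    """P(at least one compromised unit in a random sample of n) when a
    fraction k of a large population is compromised."""
    return 1.0 - (1.0 - k) ** n

def sample_size_for(confidence: float, k: float) -> int:
    """Smallest n reaching the target detection confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - k))

tier1 = detection_probability(50, 0.05)   # ~0.923 for the Tier 1 reference point
```

The same formula answers the budgeting question in reverse: to detect a 1% compromise with the same ~92% confidence, the required sample grows to roughly 250 units.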
What “Gold tier” requires:
A profile achieves Gold tier (L0-B or higher) when:
- ≥80% of deployed units have been covered by sampling programs over the profile’s lifetime.
- No critical findings (backdoors, RNG bias >threshold) in the last 4 quarters.
- Side-channel leakage remains within published budget under independent testing.
- At least 2 independent labs have conducted inspections.
Gold tier unlocks:
- Full WC eligibility (no issuance cap).
- Zero collateral haircut.
- Eligibility for “pristine collateral” designation in DeFi integrations.
Funding mechanism:
The protocol allocates 2% of fee revenue to a Layer 0 Assurance Fund. This fund:
- Contracts with independent hardware security labs.
- Publishes RFPs for sampling campaigns.
- Maintains a public registry of inspection results.
- Is governed by a multisig of hardware security researchers (not protocol developers).
Transparency:
All sampling results are published within 30 days. Raw data (images, test logs) is archived and available for independent verification. If a lab’s findings are disputed, a second lab can be commissioned for arbitration.
14.4.5 How this opens new economies
Verifiable machines don’t just make today’s cloud slightly less sketchy. They unlock whole categories of economic arrangement that aren’t viable when hardware is a black box.
Verifiable cloud and compute co‑ops. If you can spin up a rack of open‑design TEEs or accelerators and have them produce attestations and ZK receipts that anyone can verify, then:
- A small data center in Nairobi or Reykjavík can sell the same class of “trusted inference” or “trusted proving” as a hyperscaler in Virginia.
- A co‑op of households can pool “AI appliances” in their basements and earn by running verified work for others.
- Regulators and enterprises can enforce compliance through proofs and telemetry, not vendor logos.
Compute ceases to be a winner‑takes‑all brand game and becomes a commodity with open admission.
Civic infrastructure that doesn’t depend on one vendor’s conscience. Voting machines, digital ID kiosks, public‑health dashboards: today they are RFPs to a short list of contractors. With open hardware and proof‑wrapped attestation, you can build ballot boxes and ID hardware whose entire stack is open to public inspection and require that each device emit public proofs of correct behavior. Democratic legitimacy becomes a property of math and sampling, not of which vendor’s logo sits on the plastic.
As the physical world fills with sensors and actuators, the ability to sell tamper-evident telemetry becomes critical. An environmental sensor built on an open profile, with attested firmware and ZK-wrapped readings, can sell CO₂ or temperature data as evidence into climate, insurance, and industrial hedging markets, not just as numbers on a dashboard.
Hardware‑native monetary instruments. Once machines themselves are verifiable actors, we can imagine “miner‑notes” backed by the future output of a specific open‑design proving farm, or municipal bonds whose coupon is denominated in SLA‑backed capacity: X proofs per second, Y verified FLOPs, Z private swaps per month, all attested by open hardware meters. These become new kinds of collateral: claims on capacity the world must keep buying.
Open profiles manufactured at multiple fabs in multiple jurisdictions turn chip supply into a multi-polar fabric instead of a single chokepoint. Neutral money needs neutral hardware; verifiable machines produced on a diversified manufacturing base make that a reachable design goal rather than a slogan.
14.5 Verifiable power: energy as a first‑class input
If Work Credits are energy‑anchored claims on triad work, then power is not just an environmental footnote; it is part of the monetary base.
Layer 0 treats power in three ways:
- Measurement. Prover farms, corridor relays, and key infrastructure track:
  - Power draw over time at reasonable resolution.
  - Mapping between workloads and power (e.g., kWh per MatMul at size n).
  - Local generation vs. grid intake (e.g., % renewable, % diesel).
  These measurements are folded into Work Credit metadata and VerifyPrice models. The goal is not ESG virtue signaling but pricing: how much real resource did this verified work consume?
- Resilience. A repression toolkit can include planned blackouts and energy rationing. Layer 0 encourages:
  - Micro‑grids or backup generation for critical proving and routing infrastructure.
  - Geographic and jurisdictional dispersion of power sources.
  - Explicit Layer‑0 stress tests: “what fraction of triad capacity survives a 72‑hour national‑grid outage in jurisdiction J?”
- Policy hedging. Energy regimes are political. Some jurisdictions may tax or ban “wasteful” compute. Having transparent power telemetry allows:
  - Answering “how much of this is actually training/inference/proof that humans pay for?” vs. “pure heat.”
  - Making a case that PoUW is not gratuitous burn but a priced input into verifiable services.
For Work Credits, this means:
- Credits can carry energy provenance tags (“this was produced on renewables vs. mixed grid”).
- Markets can price credits differently based on energy or location without protocol changes.
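A minimal sketch of how energy provenance might ride along on Work Credit metadata; the field names and the per-joule helper are illustrative, not a defined schema:

```python
from dataclasses import dataclass

@dataclass
class EnergyProvenance:
    kwh: float               # metered energy consumed by the workload
    renewable_share: float   # fraction from renewables vs. mixed grid intake
    jurisdiction: str        # where the power was drawn

def work_credit_per_joule(credits_minted: float, kwh: float) -> float:
    """'Work Credit per joule', the linkage the design goals call for."""
    joules = kwh * 3.6e6     # 1 kWh = 3.6 MJ
    return credits_minted / joules
```

Because the tags live in metadata rather than consensus rules, markets can discriminate on them (renewables premium, jurisdiction discount) without protocol changes, exactly as the text requires.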
14.6 Operational patterns: profiles, upgrades, and failure modes
Layer 0 is not static; hardware evolves, breaks, and gets deprecated. We need patterns for living with that churn.
14.6.1 Hardware profiles and workload binding
Each canonical workload (W) in Layer 4 is associated with one or more acceptable hardware profiles (H_1, H_2, …). Profiles define:
- Minimum performance characteristics (to keep VerifyPrice in target bands).
- Acceptable side‑channel leakage budgets.
- Attestation/sampling histories.
- Known caveats (“avoid profile H3 for workloads with secret inputs; leaks are too strong”).
When PaL compiles a workload or the router assigns work, it can:
- Target specific profiles (e.g., “only open cores”).
- Diversify across profiles (for redundancy).
- Refuse to assign high‑sensitivity workloads to marginal profiles.
This is how Layer 0 informs the market rather than hiding under it.
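A sketch of the assignment filter this implies, with illustrative profile fields and selection rules:

```python
from dataclasses import dataclass

@dataclass
class HardwareProfile:
    hid: str
    open_core: bool
    leakage_budget: float    # published side-channel budget (lower is better)
    status: str              # "active" | "probation" | "deprecated"

def eligible_profiles(profiles, require_open: bool,
                      max_leakage: float, secret_inputs: bool):
    """Refuse deprecated profiles outright; refuse workloads with secret
    inputs on profiles whose leakage budget exceeds the stated tolerance."""
    out = []
    for p in profiles:
        if p.status == "deprecated":
            continue
        if require_open and not p.open_core:
            continue
        if secret_inputs and p.leakage_budget > max_leakage:
            continue
        out.append(p)
    return out
```

A router could then diversify assignments across the surviving profiles for redundancy, per the list above.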
14.6.2 Upgrades and deprecations
Hardware ages; bugs and backdoors are discovered; fabs change hands. Layer 0 needs clear life‑cycle rules:
- Onboarding. New profiles go through a probation period with extra sampling and conservative Work Credit weights.
- Deprecation. When a profile is compromised or obsolete, higher layers:
  - Stop accepting new Work Credits minted from it.
  - Quarantine or hair‑cut existing credits tied to it (depending on severity and whether historical work is still trustworthy).
  - Publish an incident report at Layer 6 describing the issue and decisions.
- Migration. Proof factories and corridor operators need technical paths to migrate workloads off deprecated profiles without massive downtime.
In monetary terms, this is the hardware analog of a bond downgrade: transparent, describable, and priced, rather than silently swept under the rug.
14.6.2.1 Non-discretionary downgrade rules
To prevent “ex post discretionary default” critiques, deprecation and haircut decisions follow a predictable severity framework, not ad hoc governance.
Severity Levels:
| Severity | Definition | Examples |
|---|---|---|
| S0 (Watch) | Potential issue; under investigation | Anomalous side-channel readings; unverified third-party report |
| S1 (Warning) | Confirmed issue with limited impact | RNG bias below 1%; isolated firmware bug; single-sample finding |
| S2 (Critical) | Confirmed issue with systemic impact | Backdoor in >5% of sampled units; key extraction demonstrated; vendor non-cooperation |
| S3 (Emergency) | Active exploitation or catastrophic risk | Widespread key compromise; active attacks using vulnerability; vendor collusion confirmed |
Automatic Responses by Severity:
| Severity | Issuance | Existing WC | Collateral | Timeline |
|---|---|---|---|---|
| S0 | Continue; profile flagged in dashboards | No action | No action | Review within 30 days |
| S1 | Reduced weight (0.8x); new issuance capped | No action | 5% haircut for new collateral posts | Resolve within 90 days or escalate |
| S2 | Suspended | Quarantined (cannot be used as collateral or for fees until resolution) | 15% haircut | Migration path published within 14 days |
| S3 | Permanently suspended | 25% haircut applied; remaining value redeemable after 180-day sunset | Ineligible as collateral | Immediate action; incident report within 7 days |
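The response table lends itself to direct encoding, which is part of what makes the rules non-discretionary. A sketch with the reference parameters; structure names are illustrative:

```python
# Automatic responses by severity, mirroring the table above.
# At S2 existing WC are quarantined rather than cut; at S3 the 25% haircut
# applies and the credits become ineligible as collateral.
SEVERITY_RESPONSE = {
    "S0": {"issuance": "continue (flagged in dashboards)",
           "existing_wc_haircut": 0.00, "collateral_haircut": 0.00,
           "quarantine": False},
    "S1": {"issuance": "weight 0.8x, new issuance capped",
           "existing_wc_haircut": 0.00, "collateral_haircut": 0.05,
           "quarantine": False},
    "S2": {"issuance": "suspended",
           "existing_wc_haircut": 0.00, "collateral_haircut": 0.15,
           "quarantine": True},
    "S3": {"issuance": "permanently suspended",
           "existing_wc_haircut": 0.25, "collateral_haircut": None,
           "quarantine": True},
}

def residual_value(severity: str, wc_face_value: float) -> float:
    """Value WC holders retain after the automatic haircut; even at S3,
    75% remains redeemable after the 180-day sunset."""
    cut = SEVERITY_RESPONSE[severity]["existing_wc_haircut"]
    return wc_face_value * (1.0 - cut)
```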
Process guarantees:
- Evidence threshold: Severity levels require specific evidence types:
  - S1: Independent lab confirmation or 2+ credible reports.
  - S2: Reproducible demonstration by qualified auditor.
  - S3: Active exploitation observed or vendor admission.
- Multi-party decision: Severity escalation requires sign-off from ≥3 of 5 designated hardware security reviewers (independent of protocol governance).
- Appeal process: Profile sponsors can challenge findings within 14 days. A second independent lab is commissioned for arbitration. If appeal succeeds, all penalties are reversed.
- Sunset, not confiscation: Even at S3, WC holders retain 75% of value after haircut, redeemable after sunset period. This is not a bail-in; it is risk-adjusted repricing.
- Transparency: All severity determinations, evidence summaries, and haircut calculations are published. Raw evidence is archived for independent verification.
Why this matters:
If Work Credits are to function as SoV, holders need confidence that downgrades are:
- Predictable: Based on published rules, not governance mood.
- Proportional: Haircuts match severity; minor issues don’t trigger major losses.
- Appealable: Errors can be corrected.
- Transparent: All decisions are auditable.
This framework transforms “haircut risk” from discretionary governance into priced, bounded uncertainty—the same standard applied to bond credit ratings.
14.6.3 PQ and cryptographic agility
Some Layer‑0 assumptions are about cryptography, not just silicon:
- Signature schemes burned into ROMs.
- Hash functions in hardware accelerators.
- RNG primitives.
Layer 0 insists that:
- Hardware and firmware expose enough configurability to migrate to post‑quantum or new primitives without throwing away entire fabs.
- Hardware profiles document cryptographic agility (e.g., “can switch hash from X to Y via firmware; signature scheme fixed”).
- Workload design avoids tying monetary finality to primitives that cannot be upgraded.
This is less romantic than “post‑quantum ready” marketing, but it’s what matters when you treat triad capacity as a long‑lived asset.
14.7 What Layer 0 exports to higher layers
Higher layers should not need to reason about dopant masks or EM probes. Layer 0 exports a small set of artifacts and APIs:
- Hardware profiles (HIDs). Compact identifiers + dossiers that describe profile properties, sampling history, cryptographic capabilities, and current status (active, probation, deprecated).
- Attestation receipts. Machine‑level statements (optionally SNARK‑wrapped) that bind:
  - Device → hardware profile HID
  - Firmware / prover binary → hash
  - Measurement → nonce / time
  Higher layers don’t parse raw attestation formats; they consume “this device claims profile H, and here is a proof it matches currently‑accepted measurements.”
- Power and health telemetry. Streams or periodic snapshots of:
  - Power usage for workloads.
  - Failure rates, thermal anomalies, uptime patterns.
  These feed VerifyPrice and operational dashboards.
- Incident and status flags. Signals such as:
  - “Profile H3 compromised; do not accept new work; see report R.”
  - “Entropy tests for RNG type R1 failing; treat as suspect.”
With these exports, Layers 2–5 can:
- Treat Layer‑0 facts as inputs to workload assignment, fee schedules, and Work Credit policies.
- Avoid re‑implementing hardware security lore at every layer.
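A sketch of how a higher layer might consume these exports without any hardware lore of its own, using illustrative structures and a hypothetical probation weight:

```python
from dataclasses import dataclass, field

@dataclass
class ProfileDossier:
    hid: str                # hardware profile identifier (HID)
    status: str             # "active" | "probation" | "deprecated"
    incident_flags: list = field(default_factory=list)

def work_credit_weight(dossier: ProfileDossier) -> float:
    """Conservative consumption rule: deprecated or flagged profiles mint
    nothing; probation profiles mint at a reduced, illustrative weight."""
    if dossier.status == "deprecated" or dossier.incident_flags:
        return 0.0
    if dossier.status == "probation":
        return 0.5  # hypothetical conservative onboarding weight
    return 1.0
```

The point of the sketch is the interface: the consuming layer reads status and flags, never raw attestation formats or side-channel reports.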
14.8 Stress tests for Layer 0
Finally, Layer 0 should be judged by explicit stress tests, just like the monetary layer.
Examples:
- TEE backdoor discovered.
  - Can the stack identify which profiles and Work Credits are affected?
  - Can we continue triad work on non‑TEE profiles without massive downtime?
  - Are incident reports and telemetry clear enough for treasuries to mark positions?
- National‑scale power rationing.
  - What fraction of triad capacity is lost if country C throttles datacenters?
  - Do Work Credit markets reflect jurisdictional concentration ex ante?
- Vendor coercion.
  - A major chip vendor is forced to insert a kill switch for certain key types.
  - Do lot sampling and open‑core alternatives exist?
  - Can profiles be deprecated and replaced in a bounded time?
Layer 0 passes if:
- Failures are visible, localized, and actionable, not silent.
- The triad capacity as a whole remains sufficient and diversified enough that Work Credits and higher‑layer SoV instruments remain credible.
Layer 0, in short, is not about trusting hardware absolutely; it is about instrumenting hardware enough that the triad’s monetary story is not resting on hand‑waving. Verifiable Machines and Energy make “energy‑anchored Work Credits” and “verified compute as AI Money” more than slogans. They give higher layers something to stand on when the world puts real weight on Privacy, Proofs, and Compute.
From here, Part III climbs to Layer 1 – Reachability: if Layer 0 asks “can we trust what the machine does?”, Layer 1 asks “can we talk to it at all when someone is trying to turn the network off?”