Relay Pool Orchestration
1. Purpose
The multi-relay fan-out specification defines what clients do: push encrypted blobs to 3 relays, pull from any survivor, track per-relay cursors independently. This document defines what happens behind the clients — how a pool of 3,000 relays serving 2,000,000 users is allocated, monitored, healed, and governed.
Three components form the orchestration model. Clients provide distributed detection — they are the first to know when a relay is unreachable. Surviving relays provide aggregation and verification — they collect client reports, run independent health checks, and escalate through a tiered warning system. HQ provides decision and action — it holds the global view, the provider credentials, the spawn authority, and the audit trail. It is the only component that can create or destroy infrastructure.
The design principle that governs everything here is the same one that governs 0k-sync itself: separation of concerns. Relays are dumb pipes. Clients are smart endpoints. HQ is the brain. No component exceeds its authority. No component touches user data.
2. The Fleet
2.1 Scale
| Component | Count | Role |
|---|---|---|
| Ephemeral relays | 1,500 | Short-term blob routing with TTL-based expiry |
| Blind replicas | 1,500 | Long-term encrypted archive for disaster recovery |
| Users | 2,000,000 | Each assigned 3 ephemeral relays + 3 blind replicas |
| Avg. users per relay | ~4,000 | Per ephemeral relay and per blind replica (2,000,000 users × 3 assignments ÷ 1,500 nodes per pool) |
2.2 Provider Diversity
Each user’s 3 relays are distributed across 3 independent hosting providers. No two of a user’s assigned relays share the same provider. This eliminates single-provider failure as a threat to any individual user’s redundancy.
```
User's relay assignment:
  Relay A → Cloud Provider 1 (EU region)
  Relay B → Cloud Provider 2 (EU region)
  Relay C → Cloud Provider 3 (EU region)
```

The same principle applies to blind replicas. A user’s 3 blind replicas are spread across 3 providers. The complete encrypted archive survives any single provider failing — whether from outage, bankruptcy, government seizure, or physical disaster.
Why this matters for the threat model. A Class 3 adversary (institutional, law enforcement) issuing a subpoena to one provider obtains one relay’s encrypted blobs from one jurisdiction. The other two replicas sit with different providers in different jurisdictions, requiring separate legal processes. And even with all three — Ring 1 holds. The data is opaque.
2.3 Relay Identity and DNS
Relays are addressed by logical DNS names, not raw iroh NodeIds.
```
relay-eu-042.ephemeral.0ksync.net  → NodeId (current instance)
replica-ap-017.archive.0ksync.net  → NodeId (current instance)
```

The v3 invite format stores these logical names. When a relay dies and a replacement spawns with a new NodeId, the DNS record is updated. Clients resolve the name on their next connection attempt and reach the replacement transparently. No invite rotation, no user action, no re-pairing.
This is how every CDN, load balancer, and distributed system in the world solves endpoint migration. The abstraction layer between logical identity and physical instance is DNS.
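To make the indirection concrete, here is a minimal Rust sketch of the client-side lookup, assuming the current NodeId is published as a TXT record of the form `nodeid=<id>` under the logical name. The record layout and helper are illustrative assumptions, not the shipped format.

```rust
#[derive(Debug)]
struct ResolveError(&'static str);

/// Pick the current NodeId out of the TXT strings returned for a logical
/// relay name. With the short TTLs described in Section 7.3, a replacement
/// relay's new NodeId is picked up within about a minute.
fn node_id_from_txt(records: &[String]) -> Result<&str, ResolveError> {
    records
        .iter()
        .find_map(|r| r.strip_prefix("nodeid="))
        .ok_or(ResolveError("no nodeid TXT record for this name"))
}

fn main() {
    // After HQ replaces a dead relay, only this record changes;
    // the logical name stored in the v3 invite stays the same.
    let txt = vec!["nodeid=example-new-instance-id".to_string()];
    println!("{:?}", node_id_from_txt(&txt));
}
```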
3. Three-Layer Detection Model
3.1 Layer 1 — Client Detection (Distributed)
Clients are the first to know when a relay is unreachable. The fan-out spec already handles the immediate impact: push fails over to a surviving relay, pull tries the next in preference order. The user’s sync continues uninterrupted on 2 relays.
What this layer adds: the client reports the failure to its surviving relays. A simple message — “I cannot reach relay-eu-042, unreachable since 14:32:01 UTC.” The client does not diagnose, does not retry heroically, does not attempt to fix anything. It reports and moves on. Detection is fully distributed across up to 4,000 clients per relay.
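A sketch of what such a report might look like on the wire — a plain struct, assuming serialisation details are handled elsewhere; the field names are illustrative:

```rust
/// Sent by a client to each surviving relay when one of its three assigned
/// relays stops answering. Carries an observation only — no user data, no
/// key material, no diagnosis.
#[derive(Debug)]
struct RelayHealthReport {
    /// Logical DNS name of the unreachable relay, not a NodeId.
    dead_relay: String,
    /// When this client first failed to reach it (Unix seconds, UTC).
    first_failure_utc: u64,
    /// Coarse client region, used later to rule out regional issues.
    approx_region: String,
}

fn main() {
    let report = RelayHealthReport {
        dead_relay: "relay-eu-042.ephemeral.0ksync.net".into(),
        first_failure_utc: 1_770_000_000, // illustrative timestamp
        approx_region: "EU-north".into(), // never a precise location
    };
    println!("{report:?}");
}
```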
3.2 Layer 2 — Relay Aggregation and Verification (Regional)
Surviving relays collect client reports and aggregate them. When enough independent clients report the same relay dead, the surviving relay begins its own verification — direct health checks against the reported-dead relay’s /health endpoint.
This two-source model prevents both false positives and manipulation. Client reports alone could be triggered by regional network issues affecting a subset of users. A malicious client could spam false reports to trigger unnecessary spawns. The surviving relay’s independent verification is the confirmation. Client reports are the early warning system; relay-to-relay health checks are the evidence.
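A minimal sketch of that aggregation step, assuming reports are keyed by a per-client identifier. The threshold and window come from Section 4.1; everything else is illustrative:

```rust
use std::collections::HashMap;

const UNIQUE_REPORTERS_FOR_YELLOW: usize = 10; // "more than 10 unique clients"
const WINDOW_SECS: u64 = 120;                  // 2-minute window

/// Per-watched-relay aggregation on a surviving relay. Deduplicates by
/// reporter, so a single malicious client spamming reports counts once.
struct ReportAggregator {
    /// reporter id → Unix timestamp of that reporter's latest report
    reporters: HashMap<String, u64>,
}

impl ReportAggregator {
    /// Record one client report. Returns true when enough unique clients
    /// have reported within the window — the cue to start independent
    /// /health checks rather than trust the reports alone.
    fn record(&mut self, reporter_id: &str, now: u64) -> bool {
        self.reporters.insert(reporter_id.to_owned(), now);
        self.reporters
            .retain(|_, t| now.saturating_sub(*t) <= WINDOW_SECS);
        self.reporters.len() > UNIQUE_REPORTERS_FOR_YELLOW
    }
}

fn main() {
    let mut agg = ReportAggregator { reporters: HashMap::new() };
    for i in 0..12u64 {
        if agg.record(&format!("client-{i}"), 1_000 + i) {
            println!("threshold reached: begin health checks");
        }
    }
}
```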
3.3 Layer 3 — HQ (Global Decision Authority)
HQ receives tiered alerts from surviving relays across the entire fleet. It has the global view: which relays are healthy, which are degraded, which providers are experiencing issues, what the current allocation map looks like. HQ makes the decision to spawn, selects the provider and region, provisions the instance, updates DNS, and logs every action.
HQ is the only component with provider API credentials. Relays never hold credentials to create or destroy infrastructure. This keeps the relay’s posture minimal: it routes blobs, it aggregates health reports, it forwards alerts. It does not act on the infrastructure itself.
4. Tiered Emergency Protocol
4.1 TIER 1 — YELLOW (Watch)
Trigger: More than 10 unique clients report the same relay unreachable within a 2-minute window.
Action at relay: Surviving relay begins health checking the reported relay at 30-second intervals. Continues accumulating client reports. Forwards YELLOW alert to HQ with report count, first-seen timestamp, and client geographic distribution.
Action at HQ: Dashboard indicator. Logged. No infrastructure action. This tier captures transient blips, brief network rerouting, or regional connectivity issues that resolve themselves.
User impact: None. Fan-out spec handles failover transparently. Users are on 2 relays instead of 3 — reduced redundancy but full functionality.
4.2 TIER 2 — ORANGE (Degraded)
Trigger: Relay-to-relay health check fails 3 consecutive times (90 seconds of sustained failure). Client reports still accumulating from multiple geographic regions.
Action at relay: Forwards ORANGE alert to HQ with health check failure log, client report summary, and confirmation that the failure is not region-specific.
Action at HQ: Marks relay as degraded in the fleet registry. Begins pre-warming a replacement: selects the target provider and region based on the dead relay’s role in the allocation map, prepares the deployment configuration, but does not provision. The relay may still recover — a host reboot, a network restoration, a provider-side fix could bring it back.
User impact: None. Same as YELLOW. Users continue on surviving relays.
4.3 TIER 3 — RED (Confirmed Down)
Trigger: 5 minutes of continuous failure. All relay-to-relay health checks failing. Client report count growing. No recovery observed. Multiple surviving relays independently confirming the same failure (cross-verification).
Action at HQ:
- Emergency spawn triggered. New relay instance provisioned on the designated provider via API.
- New relay starts, generates its iroh NodeId, begins accepting connections.
- DNS record updated: the logical name (e.g. relay-eu-042.ephemeral.0ksync.net) resolves to the new NodeId.
- HQ notifies surviving relays that a replacement is live.
- Full audit log entry: timestamp, dead relay identity, failure duration, report count, replacement identity, provider, region, DNS propagation confirmation.
User impact: Minimal. During the 5-minute detection window plus spawn time (typically 30-90 seconds for a container, 2-3 minutes for a VM), users operate on 2 relays. Once DNS propagates (seconds to minutes depending on TTL), clients connecting to the logical name reach the replacement. Normal 3-relay redundancy restored.
Data recovery for the replacement: The new relay starts empty. For ephemeral relays, this is fine — the blobs it missed have TTL-based expiry anyway, and other relays hold copies (fan-out). For blind replicas, the replacement needs to catch up. Since clients push to all 3 replicas via fan-out, the replacement will receive all new blobs from the moment it’s live. Historical data that existed only on the dead replica is either still on the other 2 replicas (fan-out guarantee) or can be repushed from the desktop vault. The fan-out spec ensures no single blind replica is the sole holder of any blob.
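Taken together, the YELLOW/ORANGE/RED thresholds behave as a small per-relay state machine. A minimal Rust sketch, with the timings taken from Sections 4.1-4.3; the types, field names, and transition function are illustrative, not the shipped implementation:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Tier { Green, Yellow, Orange, Red }

/// Escalation state tracked per watched relay. Entry into Yellow comes
/// from the client-report aggregation in 3.2; everything after that is
/// driven by relay-to-relay health checks.
struct RelayWatch {
    tier: Tier,
    consecutive_failures: u32,
    first_failure_utc: u64, // Unix seconds; 0 = no ongoing failure
}

impl RelayWatch {
    /// Feed one 30-second health-check result; returns the (possibly
    /// escalated) tier. Any success de-escalates — the relay recovered.
    fn on_health_check(&mut self, ok: bool, now: u64) -> Tier {
        if ok {
            self.tier = Tier::Green;
            self.consecutive_failures = 0;
            self.first_failure_utc = 0;
            return self.tier;
        }
        self.consecutive_failures += 1;
        if self.first_failure_utc == 0 {
            self.first_failure_utc = now;
        }
        self.tier = match self.tier {
            // 3 consecutive failures ≈ 90 s sustained → ORANGE (pre-warm).
            Tier::Yellow if self.consecutive_failures >= 3 => Tier::Orange,
            // 5 minutes of continuous failure → RED (emergency spawn).
            Tier::Orange if now.saturating_sub(self.first_failure_utc) >= 300 => Tier::Red,
            unchanged => unchanged,
        };
        self.tier
    }
}
```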
4.4 TIER 4 — BLACK (Provider Down)
Trigger: Multiple relays on the same provider hit RED simultaneously. HQ detects a correlated failure pattern — not one relay dead, but an entire provider’s fleet.
Action at HQ:
- Provider-level incident declared. All relays on the affected provider marked RED simultaneously.
- Replacement relays spawned on surviving providers. HQ redistributes across healthy providers to maintain diversity. If Provider 1 is down and Providers 2 and 3 are healthy, replacements split across 2 and 3 (or a 4th provider if available).
- Batch DNS update for all affected logical names.
- Incident logged with full scope: affected relay count, affected user count, replacement mapping, provider identity.
User impact: Every user had at most 1 relay on the dead provider (provider diversity rule). So every affected user is already operating on 2 healthy relays from 2 healthy providers. No user loses more than one relay simultaneously from a provider-level failure. Once replacements spawn, 3-relay redundancy is restored.
This is the scenario provider diversity was designed for. Without it, a user with all 3 relays on the same provider loses everything. With it, the worst case is temporary reduction from 3-relay to 2-relay redundancy — which the fan-out spec handles transparently.
5. HQ — The Fortress
5.1 What HQ Is
HQ is the global control plane for the relay fleet. It is the single authority that can create infrastructure, destroy infrastructure, update DNS, and modify the relay allocation map. It holds:
- API credentials for all hosting providers
- DNS management credentials
- The global fleet registry: every relay’s logical name, current NodeId, provider, region, status, and allocation map
- The emergency protocol state machine: which relays are YELLOW, ORANGE, RED, BLACK
- The complete audit log of every alert, decision, and action
5.2 What HQ Is Not
HQ never touches user data. It never receives a blob, never holds a GroupSecret, never participates in the sync protocol, and has zero cryptographic dependencies on user key material. It cannot read, decrypt, or inspect any payload on any relay.
A fully compromised HQ gives the attacker the ability to:
- Spawn rogue relays (denial of service, potential traffic interception — but Ring 1 means intercepted blobs are opaque)
- Kill healthy relays (denial of service)
- Redirect DNS to attacker-controlled servers (traffic interception — but again, Ring 1 holds, and Ring 2 hybrid PQ transport would detect unknown endpoints via Noise XX mutual authentication)
- Read the fleet registry (operational metadata: which relays exist, how many users, which providers — no user data)
A fully compromised HQ cannot:
- Read any user’s plaintext data
- Extract any GroupSecret
- Decrypt any blob on any relay or blind replica
- Impersonate a legitimate relay to a client (Noise XX mutual authentication prevents this once Ring 2 is implemented)
Worst case: service disruption. Never data exposure. The zero-knowledge guarantee is architecturally independent of HQ.
5.3 HQ Resilience
HQ itself must survive provider failures. It is the component responsible for healing provider failures — it cannot be a victim of the same failures it’s meant to fix.
Deployment model: HQ runs as a lightweight service across at minimum 2 providers in active-passive or active-active configuration. Its resource requirements are minimal — it processes alerts, calls provider APIs, updates DNS records. It does not route data, store blobs, or handle user connections. A small VM or even a serverless function with durable state is sufficient.
If HQ goes down: The relay fleet continues operating normally. Users sync, fan-out works, existing relays serve. The only capability lost is auto-healing — a dead relay will not be replaced until HQ recovers. Given that each user has 3 relays on 3 providers, the probability of a user losing all 3 relays while HQ is simultaneously down approaches zero.
HQ recovery: HQ’s state (fleet registry, allocation map, DNS configuration) is backed up to provider-independent storage. A replacement HQ instance loads the registry and resumes operations. No state reconstruction needed from relays — the registry is the source of truth.
5.4 Provider Credential Isolation
Each provider’s API credentials are scoped to relay management only. They cannot access other infrastructure, cannot modify billing, cannot access unrelated services. Credentials are rotated on a defined schedule and stored encrypted at rest within HQ’s secure configuration.
If a single provider’s credentials are compromised, the attacker can only affect relays on that provider — and provider diversity ensures no user depends solely on any single provider. HQ detects anomalous provider API activity (unexpected spawns, unexpected terminations) and alerts operations.
6. Relay Allocation Strategy
6.1 Assignment Rules
When a new user provisions (first device pairing), HQ assigns 3 ephemeral relays and 3 blind replicas according to:
- Provider diversity: No two relays from the same provider. No two blind replicas from the same provider.
- Geographic proximity: Prefer relays geographically close to the user for latency. The fan-out spec’s primary relay should be the nearest; secondaries can be further.
- Load balancing: Prefer relays with fewer assigned users. Avoid hot spots.
- Headroom: Prefer relays with available capacity for growth. Don’t assign to relays approaching resource limits.
The assignment is encoded in the v3 invite format as logical DNS names. The user’s device stores these names persistently and resolves them to NodeIds on each connection.
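A minimal sketch of these rules as a filter-and-rank pass over the fleet registry. The `RelayInfo` shape and the ranking are illustrative assumptions; the one hard constraint — provider diversity — is enforced unconditionally:

```rust
#[derive(Clone)]
struct RelayInfo {
    logical_name: String,
    provider: String,
    region: String,
    assigned_users: u32,
    capacity: u32,
}

/// Pick 3 relays for a new user: nearest and least-loaded first,
/// never reusing a provider.
fn assign_relays(fleet: &[RelayInfo], user_region: &str) -> Vec<RelayInfo> {
    let mut chosen: Vec<RelayInfo> = Vec::new();
    let mut candidates: Vec<&RelayInfo> = fleet
        .iter()
        .filter(|r| r.assigned_users < r.capacity) // headroom rule
        .collect();
    // Soft preferences: geographic proximity first, load as tie-breaker.
    candidates.sort_by_key(|r| ((r.region != user_region) as u32, r.assigned_users));
    for relay in candidates {
        // Hard constraint: provider diversity. Never reuse a provider.
        if chosen.iter().all(|c| c.provider != relay.provider) {
            chosen.push(relay.clone());
            if chosen.len() == 3 { break; }
        }
    }
    chosen
}
```

The same pass runs twice per user — once over the ephemeral pool, once over the blind replica pool — since both carry the same diversity rule.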
6.2 Rebalancing
Over time, user distribution may become uneven — some relays overloaded, others underutilised. HQ can trigger a rebalance:
- HQ selects users to migrate from an overloaded relay to an underloaded one.
- HQ spawns or identifies the target relay.
- HQ updates DNS: the user’s logical relay name now resolves to the new relay’s NodeId.
- On the client’s next connection, it reaches the new relay transparently.
- Historical data on the old relay expires via TTL (ephemeral) or remains as a redundant copy (blind replica).
Rebalancing is a background operation. It does not interrupt active sessions. It does not require user action or re-pairing.
6.3 Decommissioning
When a relay is scheduled for decommission (end of life, provider migration, cost optimisation):
- HQ marks the relay as draining — no new user assignments.
- HQ gradually migrates users to replacement relays via DNS updates.
- Once the relay has zero assigned users and its TTL window has expired (all ephemeral blobs deleted), HQ terminates the instance.
- For blind replicas: decommission is slower. HQ ensures all users assigned to the draining replica have been reassigned and their new replicas have received the full archive via normal fan-out before termination.
7. Client-Side Behaviour
7.1 Dead Relay Reporting
When a client fails to connect to one of its 3 assigned relays, it does the following (the retry schedule is sketched after the list):
- Fails over to a surviving relay (existing fan-out spec behaviour).
- Sends a relay health report to the surviving relay: dead relay’s logical name, timestamp of first failure, client’s approximate region.
- Continues normal sync on surviving relays.
- Periodically retries the dead relay in the background (exponential backoff).
- If the dead relay’s DNS resolves to a new NodeId (replacement spawned), the client connects to the replacement and resumes 3-relay operation.
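Only the retry schedule involves new arithmetic; the rest is existing fan-out behaviour. A minimal sketch of the exponential backoff, with the schedule constants as illustrative assumptions rather than specified values:

```rust
use std::time::Duration;

/// Exponential backoff for background retries of a dead relay:
/// 1 min, 2 min, 4 min, ... capped at 1 hour. The cap keeps a
/// replacement relay (new NodeId behind the same logical name)
/// from going unnoticed for long.
fn retry_delay(attempt: u32) -> Duration {
    let base = Duration::from_secs(60);
    let delay = base.saturating_mul(2u32.saturating_pow(attempt.min(6)));
    delay.min(Duration::from_secs(3600))
}

fn main() {
    for attempt in 0..8 {
        println!("attempt {attempt}: wait {:?}", retry_delay(attempt));
    }
}
```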
7.2 What the Client Does Not Do
The client does not decide which relay should replace the dead one. The client does not call provider APIs. The client does not modify its own relay assignments. The client does not coordinate with other clients about the failure. It reports, it fails over, it retries. HQ handles everything else.
7.3 DNS TTL Considerations
For relay DNS records, TTL should be short — 60 seconds or less. This ensures that when HQ updates a DNS record (relay replacement, rebalancing, decommission), clients pick up the change within a minute. The cost is more frequent DNS lookups, which is negligible for a sync application that connects periodically rather than continuously.
8. Blind Replica Specific Considerations
8.1 Data Continuity
Ephemeral relays are stateless from a data perspective — blobs expire, new ones arrive. A replacement ephemeral relay starts empty and that’s fine.
Blind replicas hold the complete encrypted archive. A replacement blind replica starts empty and must catch up. The fan-out spec guarantees this naturally: since every client pushes every blob to all 3 blind replicas, the replacement begins receiving new blobs immediately. Historical blobs that pre-date the replacement exist on the other 2 surviving replicas.
For complete archive restoration, the desktop vault (or any device with the full history) can repush historical blobs to the replacement replica. This is a background operation that does not affect normal sync. The user does not need to be aware it is happening.
8.2 Storage Growth
At scale, blind replica storage is the primary infrastructure cost. With 2,000,000 users and an average of 500 MB encrypted archive per user per year:
| Metric | Value |
|---|---|
| Total archive size (year 1) | ~1 PB |
| Per blind replica (1,500 replicas, 3x replication) | ~2 TB average |
| Storage cost (cloud block storage) | Varies by provider, per replica |
| Total storage cost (1,500 replicas) | Scales linearly with archive size |
This scales linearly with users and time. Tiered storage (recent blobs on fast SSD, older blobs on cold storage) can reduce costs as the archive ages. HQ manages storage tiers as part of fleet operations.
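As a check on the table’s per-replica figure (pure arithmetic from the stated averages):

$$
\frac{2{,}000{,}000 \times 0.5\ \text{GB} \times 3\ \text{(replication)}}{1{,}500\ \text{replicas}} = 2{,}000\ \text{GB} \approx 2\ \text{TB per blind replica (year 1)}
$$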
8.3 Provider Seizure Recovery
If a provider seizes a blind replica’s physical hardware, the data on it is protected by three rings (documented in the Blind Replica Security summary). The operational concern is availability: HQ spawns a replacement on a different provider, DNS updates, and the replacement catches up from fan-out. The seized replica’s data is useless to the seizing party — encrypted blobs with no key material present.
9. Interaction with the Threat Model
This orchestration layer adds a new component (HQ) and a new communication channel (relay health reports) to the threat model surface.
9.1 New Attack Surfaces
HQ compromise. Covered in Section 5.2. Worst case is service disruption. Data exposure is architecturally impossible.
Relay health report spoofing. A malicious client could flood surviving relays with false reports that a healthy relay is dead, attempting to trigger unnecessary spawns or waste resources. Mitigated by relay-to-relay verification — the surviving relay independently health-checks the reported relay before escalating. A healthy relay responds to health checks regardless of what clients claim.
DNS poisoning. If an attacker compromises DNS, they could redirect a relay’s logical name to an attacker-controlled server. Clients connecting to the rogue server would push encrypted blobs to it (useless — Ring 1) and potentially receive crafted responses. Ring 2 (Noise XX mutual authentication) prevents this entirely once implemented — the client verifies the relay’s identity during the handshake. Pre-Ring 2, DNS poisoning is a realistic attack that results in sync disruption but not data exposure.
HQ credentials leak. Provider API credentials leaked from HQ could allow an attacker to spawn rogue relays, terminate healthy relays, or modify DNS. Mitigated by credential rotation, scoped permissions, anomaly detection on provider API activity, and the fact that even rogue relays cannot access user data.
9.2 What This Layer Does Not Change
The zero-knowledge guarantee is independent of the orchestration layer. HQ, relay health reports, DNS, provider APIs — none of these touch user data or key material. The three-ring blind replica model holds regardless of orchestration state. The application-layer platform security holds regardless of orchestration state. The orchestration layer is purely operational infrastructure.
10. The Final Boss Layer — Decentralised Cold Archive
10.1 Why It Exists
The managed blind replicas are protected by three cryptographic rings and spread across three providers in three jurisdictions. This defeats every adversary class up to and including nation-state. But “defeats” assumes the infrastructure exists. A coordinated seizure across three providers — unlikely but not impossible for a determined state actor — removes the managed replicas from play. HQ can respawn them, but there is a window. The decentralised layer eliminates that window.
The decentralised cold archive is the unkillable floor beneath all managed infrastructure. Encrypted blobs are written to a decentralised storage network where no single entity controls the data, no single jurisdiction governs it, and no single point of failure can destroy it. There is no company to subpoena. No datacenter to raid. No server to seize. No DNS to poison. No provider to go bankrupt. The data simply exists, distributed across hundreds of independent storage nodes that don’t know each other and don’t know you.
This is the layer where the threat model stops being about cryptography and starts being about physics. The data is everywhere and nowhere. The only way to destroy it is to destroy the network — every node, simultaneously, globally.
10.2 Why 0k-Sync Data Is Perfect For It
Decentralised storage networks have tradeoffs: higher latency, variable retrieval speeds, complex deal-making. These tradeoffs are irrelevant to the cold archive role because:
Write-once, read-rarely. Blobs are written to the decentralised network continuously as a background process. They are only read in a disaster recovery scenario where all managed blind replicas are simultaneously unavailable. This is the exact access pattern decentralised storage is optimised for — high durability, low retrieval frequency.
Already encrypted, already opaque. The blobs hitting the decentralised network are Ring 1 ciphertext. The storage nodes cannot read them, cannot determine their type, cannot associate them with a user, cannot distinguish journal entries from financial transactions from receipts. The zero-knowledge guarantee extends to the decentralised layer without modification.
Content-addressable. 0k-sync already uses BLAKE3 hashing for content addressing. Decentralised networks use content addressing natively (IPFS CIDs, Filecoin piece CIDs). The mapping is natural — each blob’s BLAKE3 hash can derive or map to its decentralised storage identifier.
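A minimal sketch of that mapping, using the `blake3` crate (a real API; add `blake3` to Cargo dependencies). The derivation of the network-native identifier from this hash is left abstract because it is network-specific:

```rust
/// Content id for an encrypted blob: the same BLAKE3 hash 0k-sync
/// already uses. Whatever the decentralised network's native id is
/// (IPFS CID, Filecoin piece CID), it can be derived from or indexed
/// by this value.
fn blob_content_id(ciphertext: &[u8]) -> String {
    blake3::hash(ciphertext).to_hex().to_string()
}

fn main() {
    // The storage node only ever sees opaque bytes like these.
    let blob = b"ring-1 ciphertext bytes";
    println!("content id: {}", blob_content_id(blob));
}
```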
Fixed-size blobs, append-only. No updates, no deletes, no random access. Just “store this new blob” and, rarely, “give me all blobs for this group.” This is the simplest possible storage contract.
10.3 The Complete Archive Hierarchy
```
┌──────────────────────────────────────────────────────┐
│ Layer 1: Desktop Vault (instant, local)              │
│ Full filing cabinet, all 6 drawers                   │
│ Recovery: immediate                                  │
├──────────────────────────────────────────────────────┤
│ Layer 2: Other Devices via Fan-Out                   │
│ Phones hold full cabinet (subscribers) or 1 drawer   │
│ Recovery: pair from surviving device                 │
├──────────────────────────────────────────────────────┤
│ Layer 3: Managed Blind Replicas (minutes)            │
│ 3 replicas, 3 providers, 3 jurisdictions             │
│ 3-ring cryptographic protection                      │
│ HQ-managed, auto-healing, tiered emergency protocol  │
│ Recovery: pair new desktop, pull full archive        │
├──────────────────────────────────────────────────────┤
│ Layer 4: Decentralised Cold Archive (hours)          │
│ Hundreds of independent storage nodes                │
│ No entity to subpoena, no datacenter to raid         │
│ No HQ dependency for data durability                 │
│ Recovery: pull from network, slower but unkillable   │
└──────────────────────────────────────────────────────┘
```

Normal recovery uses Layers 1-3. Layer 4 is the final fallback — the disaster behind the disaster. A user only reaches Layer 4 if all devices are lost AND all 3 managed blind replicas are simultaneously unavailable. At that point, recovery takes hours instead of minutes, but it succeeds. The data exists. Nothing can erase it.
10.4 Network Selection: Iagon on Cardano
The decentralised cold archive uses Iagon — a decentralised storage economy built on the Cardano blockchain. This is not a vendor selection. It is an architectural alignment. The entire VardKista payment and verification stack runs through Cardano. Iagon extends that to storage.
Why Iagon.
Iagon shards, encrypts, and distributes files across a decentralised node network. Data sovereignty is its core principle — users control where their data is stored and how it is accessed. The network is built on Cardano for the same reasons VardKista uses Cardano: security, decentralisation, cost-effectiveness, and regulatory alignment. Iagon supports GDPR-compliant regional storage allocation, allowing encrypted blobs to be stored on EU-only nodes for European users.
The Cardano-native architecture creates a unified trust layer. Payment receipts, storage entitlements, storage proofs, and governance all live on the same chain. Every claim is verifiable. Every receipt is permanent.
Why self-operated nodes.
Self-operated Iagon storage nodes run on the same provider-diverse infrastructure that hosts the managed blind replicas. This eliminates dependency on the Iagon network’s economic health. If the wider network thrives, VardKista benefits from additional redundancy across hundreds of independent nodes. If the wider network contracts, self-operated nodes still hold every user’s encrypted archive.
The risk model inverts. The question is not “will Iagon survive?” The answer is “it doesn’t matter.” Self-operated nodes guarantee data persistence regardless of network economics. The decentralised network is a bonus distribution layer. Self-operated nodes are the guarantee.
Node requirements. Each Iagon node requires: 900 GB minimum storage, 4 GB RAM, 20 Mbps read/write speeds, 90% uptime. These requirements are trivially satisfied by the existing cloud infrastructure already running ephemeral relays and blind replicas. The Iagon node CLI runs as an additional container on the same hosts — marginal resource cost only.
Node economics. Staked IAG is locked while the node operates and cannot be withdrawn until 3 months after node retirement. While operating, nodes earn IAG from other network users storing data on them. Over time, earned IAG offsets or exceeds the staked amount, making the final boss layer self-funding.
10.5 Infrastructure Layout
```
Cloud Provider 1 (EU region)
├── Ephemeral relays (pool)
├── Managed blind replicas (pool)
└── Iagon node: 1TB committed, staked

Cloud Provider 2 (EU region)
├── Ephemeral relays (pool)
├── Managed blind replicas (pool)
└── Iagon node: 1TB committed, staked

Cloud Provider 3 (EU region)
├── Ephemeral relays (pool)
├── Managed blind replicas (pool)
└── Iagon node: 1TB committed, staked
```

Each provider hosts the full stack: real-time sync (ephemeral relays), durable archive (managed blind replicas), and unkillable archive (Iagon nodes). Provider diversity applies to all three layers simultaneously. A provider failure loses one of each — the remaining two of each continue serving.
10.6 Cardano Payment → Storage Entitlement
The mechanism connecting user payments to decentralised storage entitlement is already built. When a user subscribes:
- Stripe processes the payment.
- A Cardano transaction is created containing the payment receipt hash.
- This transaction now additionally references the user’s Iagon storage allocation.
- The transaction is on-chain, permanent, and verifiable by any party.
The user’s device holds the transaction hash. That hash proves payment and storage entitlement simultaneously. No account database. No login system. No “trust me bro.” The Cardano ledger is the source of truth.
In the disaster recovery scenario — user loses everything, all managed infrastructure is unavailable — the user provisions a new device, enters their passphrase, and proves storage entitlement via their Cardano transaction. The Iagon network serves the encrypted blobs. The device decrypts locally. Recovery succeeds without any managed infrastructure participating.
This is the only consumer sync product where the payment receipt and the storage proof are the same cryptographic artifact, verifiable on a public ledger, controlled by no single entity.
10.7 Write Path
The decentralised archive write is a background process, not on the critical sync path. It does not affect sync latency, push/pull performance, or user experience.
```
Client pushes blob
 │
 ├──→ Relay A (ephemeral, real-time)      ── fan-out spec
 ├──→ Relay B (ephemeral, real-time)      ── fan-out spec
 ├──→ Relay C (ephemeral, real-time)      ── fan-out spec
 ├──→ Blind Replica 1 (managed, durable)  ── fan-out spec
 ├──→ Blind Replica 2 (managed, durable)  ── fan-out spec
 ├──→ Blind Replica 3 (managed, durable)  ── fan-out spec
 │
 └──→ Iagon nodes (cold, unkillable)      ── background batch
        ├── Self-operated node (Provider 1)
        ├── Self-operated node (Provider 2)
        ├── Self-operated node (Provider 3)
        └── Wider Iagon network (additional redundancy)
```

The Iagon write can be batched — accumulate blobs locally or on a managed blind replica, then push them to the Iagon nodes on a schedule (hourly, daily). This reduces transaction overhead and smooths out network variability. The lag between managed replicas (real-time) and the Iagon archive (batched) is acceptable because Layer 4 is only accessed when Layers 1-3 are all unavailable.
Who pushes to Iagon? Each managed blind replica has a sidecar process that pushes its blobs to the co-located Iagon node. Blind replica on Provider 1 pushes to the Iagon node on Provider 1. Blind replica on Provider 2 pushes to the Iagon node on Provider 2. No cross-provider traffic for the write path. HQ does not participate — the data path stays away from the control plane.
Three blind replicas pushing to three Iagon nodes means natural redundancy. If one sidecar fails, the other two push the same blobs to their respective nodes. Iagon’s sharding distributes the data across the wider network from there. Deduplication is handled at the network level via content addressing.
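A minimal sketch of the sidecar’s batch loop. The `IagonNode` trait is a stand-in — the real node is driven through Iagon’s own CLI/API, which is not modelled here — and all names are illustrative:

```rust
use std::collections::VecDeque;

/// Stand-in for the co-located Iagon node (illustrative, not a real API).
trait IagonNode {
    fn store(&mut self, content_id: &str, ciphertext: &[u8]) -> Result<(), String>;
}

struct Sidecar<N: IagonNode> {
    node: N,
    /// (content id, ciphertext) pairs received since the last flush.
    pending: VecDeque<(String, Vec<u8>)>,
}

impl<N: IagonNode> Sidecar<N> {
    /// Runs on the batch schedule (hourly/daily). A failed push stays
    /// queued for the next cycle; the other two sidecars push the same
    /// blobs anyway, and content addressing dedupes at the network level.
    fn flush(&mut self) {
        while let Some((id, blob)) = self.pending.pop_front() {
            if self.node.store(&id, &blob).is_err() {
                self.pending.push_front((id, blob)); // retry next cycle
                break;
            }
        }
    }
}
```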
10.8 Retrieval Path
Retrieval from the Iagon archive is the disaster recovery path of last resort:
- User has lost all devices.
- All 3 managed blind replicas are unavailable (provider seizure, coordinated outage, or HQ unable to respawn in time).
- User provisions a new desktop and enters their passphrase.
- VardKista derives the Cardano transaction reference from the passphrase-derived key material.
- The on-chain payment receipt proves storage entitlement — no account, no login, no managed service needed.
- VardKista queries the Iagon network for the user’s encrypted blob archive.
- Blobs are retrieved from self-operated nodes and/or the wider Iagon network.
- Desktop decrypts locally using GroupSecrets derived from the passphrase.
- Desktop rebuilds local databases and resumes as vault.
Performance expectations: Retrieval from the Iagon network is slower than from managed blind replicas. For a 500 MB archive, expect minutes to hours depending on network conditions and node availability. For multi-GB archives, hours. This is acceptable because this path is only used when every other recovery option has failed. The user is already in an “everything went wrong” scenario — waiting longer to recover years of data is an acceptable tradeoff.
Self-operated node advantage. Since self-operated Iagon nodes run on known infrastructure, retrieval can prioritise these nodes for faster response. The client can connect directly to self-operated nodes first, falling back to the wider network only if those nodes are also unavailable. This gives near-managed-infrastructure retrieval speed for the common case, with true decentralised fallback for the extreme case.
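A minimal sketch of that retrieval ordering, with the node addresses and fetch functions as illustrative assumptions:

```rust
/// Try the known self-operated nodes before falling back to a general
/// network query — near-managed retrieval speed in the common case,
/// true decentralised fallback in the extreme case.
fn fetch_blob(
    content_id: &str,
    self_operated: &[&str],
    fetch: impl Fn(&str, &str) -> Option<Vec<u8>>, // (node, id) -> blob
    network_fetch: impl Fn(&str) -> Option<Vec<u8>>,
) -> Option<Vec<u8>> {
    self_operated
        .iter()
        .copied()
        .find_map(|node| fetch(node, content_id))
        .or_else(|| network_fetch(content_id))
}
```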
10.9 Threat Model Impact
The Iagon archive fundamentally changes the availability guarantee against the most extreme adversary scenarios:
Class 3 — Institutional (law enforcement, intelligence agency). Without Layer 4: coordinated seizure across 3 providers in 3 jurisdictions could temporarily or permanently remove all managed blind replicas. HQ respawns replacements, but the window exists and the historical archive on the seized hardware must be repushed from client devices. With Layer 4: seizure of all managed replicas is a service disruption, not a data loss event. The complete encrypted archive exists on self-operated Iagon nodes and the wider Iagon network. Even if the seized providers also host self-operated Iagon nodes, the data is sharded across the wider decentralised network. Recovery takes longer but succeeds.
Class 4 — Nation-state / quantum. Without Layer 4: a state actor with sufficient resources could theoretically pressure or compromise all managed providers and HQ simultaneously, rendering managed infrastructure unavailable. With Layer 4: the encrypted archive is distributed across the Iagon network — a decentralised system operating across multiple jurisdictions with no central authority. The state actor would need to simultaneously shut down every Iagon node globally, including nodes operated by independent third parties with no connection to the operator. Practically impossible.
Iagon network contraction. If the wider Iagon network contracts economically (IAG token value crashes, third-party node operators leave), self-operated nodes still hold the data. Layer 4 degrades from “decentralised across hundreds of nodes” to “replicated across 3 self-operated nodes on 3 providers” — which is functionally identical to the managed blind replica layer. The final boss layer becomes a redundant copy of Layer 3 rather than an independent layer, but no data is lost.
Iagon network failure (total). If Iagon ceases to exist as a protocol, the self-operated nodes still hold the encrypted blobs on disk. The data exists on the hardware regardless of whether the Iagon software is running. Migration to an alternative decentralised network (Filecoin, Arweave, or whatever emerges) is a background operation — same opaque blobs, different storage backend. The Cardano payment receipt remains valid on-chain forever regardless of Iagon’s status.
10.10 Cost
The final boss layer is effectively pre-funded.
| Item | Cost | Notes |
|---|---|---|
| IAG stake (3+ nodes) | Existing IAG holdings | No purchase required. |
| Additional hardware | $0 | Runs on existing cloud infrastructure |
| Bandwidth | Marginal | Sidecar pushes from co-located blind replicas. No cross-provider traffic. |
| Ongoing IAG earnings | Positive | Nodes earn IAG from external storage demand, offsetting and eventually exceeding stake |
Against revenue of 2,000,000 users at $9.99/month, the total incremental cost of the final boss layer rounds to zero. The IAG stake is a deposit, not an expense — it returns when nodes retire. The infrastructure is shared with existing services. The only new software component is the sidecar pushing blobs from blind replicas to Iagon nodes.
10.11 The Narrative
This is the layer that makes VardKista’s security story categorically different from every competitor. iCloud can be subpoenaed. Google Drive complies with government orders. Even end-to-end encrypted services like Signal store data on infrastructure that can be seized.
VardKista’s filing cabinet has a final drawer that exists nowhere and everywhere. The data is encrypted (Ring 1, quantum-resistant), stored on a decentralised network (no entity to compel), and retrievable only by someone who knows the passphrase. There is no backdoor because there is no door — just encrypted blobs scattered across a peer-to-peer network that no government, corporation, or attacker controls.
And the entire stack is Cardano-native:
```
Payment        → Stripe → Cardano transaction hash (BUILT)
Receipt proof  → Cardano on-chain metadata (BUILT)
Storage deal   → Iagon on Cardano (NEW LINK)
Storage proof  → Iagon verification on Cardano
Node operation → Self-operated, IAG-staked from existing holdings
Governance     → Democracy Solutions on Cardano (EXISTING)
Audit trail    → Cardano ledger (permanent, public, verifiable)
```

No off-chain trust anywhere. Every payment is verifiable. Every storage entitlement is provable. Every receipt is permanent. The user’s passphrase unlocks the data. The Cardano ledger proves the entitlement. The Iagon network holds the blobs. Self-operated nodes guarantee persistence.
This is not a marketing claim. It is an architectural property. The data cannot be destroyed because there is no central point to destroy. The data cannot be read because the keys do not exist on the network. The data cannot be censored because there is no authority to issue the order. The entitlement cannot be revoked because it lives on a public ledger.
That’s the final boss. You can’t beat it because there’s nothing to fight.
11. Launch Topology — Two Relay Groups
11.1 Starting Configuration
The fleet launches with two geographic relay groups — six ephemeral relays total. Each group provides full 3-relay redundancy with provider diversity. Users are assigned to a group based on geographic proximity.
```
EU GROUP (EU / Africa / Middle East)
  relay-eu-001.ephemeral.0ksync.net → Cloud Provider 1 (EU)
  relay-eu-002.ephemeral.0ksync.net → Cloud Provider 2 (EU)
  relay-eu-003.ephemeral.0ksync.net → Cloud Provider 3 (EU)

APAC GROUP (APAC / Oceania / India)
  relay-ap-001.ephemeral.0ksync.net → Cloud Provider 1 (APAC)
  relay-ap-002.ephemeral.0ksync.net → Cloud Provider 2 (APAC)
  relay-ap-003.ephemeral.0ksync.net → Cloud Provider 3 (APAC)
```

11.2 Full Stack Per Region
Each region runs the complete infrastructure stack on three providers. Every component is co-located on existing hosts — no additional hardware required.
```
EU REGION (x3 providers):
  Per provider:
    1 ephemeral relay (real-time sync, TTL-based expiry)
    1 blind replica (managed encrypted archive)
    1 Iagon node (decentralised cold archive)
  ─────────────────────
  3 providers x 3 components = 9 containers

APAC REGION (x3 providers):
  Per provider:
    1 ephemeral relay
    1 blind replica
    1 Iagon node
  ─────────────────────
  3 providers x 3 components = 9 containers

TOTAL: 18 containers across 6 provider instances
```

11.3 Capacity
With tuned relay configuration:
| Parameter | Value |
|---|---|
| max_concurrent_sessions | 10,000 |
| global_requests_per_second | 10,000 |
| max_group_storage | 1 GB |
| groups_per_relay | 10,000 |
| max_db_size | 100 GB |
Per-relay capacity analysis at the VardKista usage pattern (~11 pushes/day, ~25 messages/day per user):
| Constraint | Capacity per relay |
|---|---|
| Concurrent sessions | ~7.7M (not bottleneck) |
| Global req/s (peak-adjusted) | ~4M |
| Bandwidth (20 TB/mo) | ~6M |
| SQLite writes | Not bottleneck |
| Storage (tuned) | ~1.3M |
Each relay comfortably serves 500K-1M users. Two groups provide total capacity of 1-2M users before a third group is needed. The 2M user target in the at-scale fleet model (Section 2) is achievable with this launch topology alone.
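The request-rate figure follows directly from the usage pattern. A worked check, assuming roughly a 6× peak-to-average traffic ratio (the factor is inferred from the table, not stated elsewhere in the spec):

$$
\frac{36\ \text{req/user/day}}{86{,}400\ \text{s/day}} \approx 4.2\times 10^{-4}\ \text{req/s per user},
\qquad
\frac{10{,}000\ \text{req/s}}{4.2\times 10^{-4}} \approx 24\text{M users sustained}
\;\Rightarrow\;
\frac{24\text{M}}{6} \approx 4\text{M peak-adjusted}
$$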
11.4 User Assignment
Users are assigned to a relay group at subscription time based on approximate geographic region. The assignment is encoded in the v3 invite as three logical DNS names from the assigned group.
| User region | Assigned group | Rationale |
|---|---|---|
| Europe | EU | Lowest latency to EU nodes |
| Africa | EU | Nearest available group |
| Middle East | EU | Nearest available group |
| Asia Pacific | APAC | Lowest latency to APAC nodes |
| Oceania | APAC | Geographic proximity |
| India | APAC | Geographic proximity |
| Americas | EU (interim) | Until US group is added |
Users in the Americas are assigned to the EU group at launch — higher latency but fully functional. The sync protocol is latency-tolerant (periodic sync, not real-time streaming). A US East group is the natural third group when American user density justifies it.
11.5 Iagon Node Allocation
IAG holdings are distributed across all 6 Iagon nodes:
| Node | Location | Storage | Stake (approx.) |
|---|---|---|---|
| iagon-eu-001 | Cloud Provider 1 (EU) | 1 TB | ~3,000 IAG |
| iagon-eu-002 | Cloud Provider 2 (EU) | 1 TB | ~3,000 IAG |
| iagon-eu-003 | Cloud Provider 3 (EU) | 1 TB | ~3,000 IAG |
| iagon-ap-001 | Cloud Provider 1 (APAC) | 1 TB | ~3,000 IAG |
| iagon-ap-002 | Cloud Provider 2 (APAC) | 1 TB | ~3,000 IAG |
| iagon-ap-003 | Cloud Provider 3 (APAC) | 1 TB | ~3,000 IAG |
| Total | — | 6 TB | ~18,000 IAG |
Remaining IAG held in reserve for staking adjustments as IAG token price fluctuates or storage commitments increase.
11.6 Estimated Monthly Cost
| Component | Instances | Est. cost per instance | Monthly total |
|---|---|---|---|
| VPS (cloud compute) | 6 | ~€35 | ~€210 |
| Additional storage volumes | 6 | ~€20 | ~€120 |
| DNS | — | Free tier | €0 |
| IAG stake | — | Deposit (not expense) | €0 |
| Total infrastructure | — | — | ~€330/month |
Against 10,000 subscribers at $9.99/month ($99,900/month revenue), infrastructure cost is 0.33%. Against 100,000 subscribers ($999,000/month), it’s 0.03%. Against 2,000,000 subscribers ($19.98M/month), it’s 0.002%.
The infrastructure cost is irrelevant at every scale milestone. Every dollar saved on infrastructure is a dollar not spent on security and resilience — the wrong tradeoff for VardKista.
11.7 Growth Path
```
LAUNCH:      2 groups (EU + APAC) → 6 relays, 18 containers
             Capacity: ~1-2M users

MILESTONE 1: Add US East group → 9 relays, 27 containers
             Trigger: >10K US subscribers or latency complaints
             Capacity: ~3M users

MILESTONE 2: Add South America group → 12 relays, 36 containers
             Trigger: Latin America user growth
             Capacity: ~4M users

MILESTONE 3: Split overloaded groups → 15+ relays
             Trigger: any relay approaching 500K users
             Action: subdivide group, migrate users via DNS
```

New groups are added by region, not by splitting existing groups. Each new group adds 9 containers (3 relays + 3 blind replicas + 3 Iagon nodes) on 3 providers. HQ manages the allocation map and DNS. Users never notice the expansion — their relay DNS names continue resolving to the same nodes.
The at-scale fleet model (1,500 ephemeral relays + 1,500 blind replicas) is the long-term steady state. The launch topology is the first step on that path, sized to handle the entire target market with headroom.
12. Business Continuity — The Sealed Envelope
12.1 The Bus Factor
VardKista’s infrastructure is designed to run without intervention. Relays serve, blind replicas archive, Iagon nodes persist, HQ auto-heals. But the person who built the system, who holds the credentials, who understands the architecture — that is a single point of failure no amount of cryptography can solve.
Business continuity is not a technical problem. It is a human problem. If the lead engineer is incapacitated, the infrastructure continues running on autopilot. But autopilot has limits. Provider invoices need paying. Credentials expire. Certificates renew. Edge cases arise. Someone needs the authority and the knowledge to keep the lights on.
12.2 Three-Tier HQ Survival
Tier 1 — Production (active). Two HQ instances on two different providers, active/active or active/passive. Live replication of the fleet registry between them. If one dies, the other continues. HQ auto-heals itself the same way it auto-heals relays.
Tier 2 — Encrypted snapshots (cloud backup). Full encrypted instance images of HQ, stored on a third provider not shared with production. Automated nightly and after every fleet change. Recovery: spin up the snapshot on any provider, boot it, HQ is operational within minutes. The snapshot IS HQ — it contains the fleet registry, allocation map, provider credentials, DNS configuration, and emergency protocol state.
Tier 3 — Recovery machines (cold standby). Two physical machines, fully loaded, tested, encrypted at rest. Kept powered down at separate trusted locations. Each machine contains:
- Complete HQ state (fleet registry, credentials, configuration)
- Full AI CLI instance with entire project context loaded
- All specifications, runbooks, architecture documentation
- Provider API access
- Wallet access for IAG stake management
- DNS management credentials
- Iagon node administration tools
- Legal authority documentation
Recovery: power on, decrypt with passphrase from sealed envelope, go live.
12.3 The AI CLI Recovery Instance
Each recovery machine runs a fully configured AI CLI instance. This is not a convenience — it is the primary recovery interface. The AI CLI instance contains:
Project knowledge:
- Every executive summary (threat model, blind replica security, relay pool orchestration)
- Every specification (0k-sync protocol, multi-relay fan-out, chaos testing)
- Every audit report and remediation status
- Architecture decisions and their rationale
- The complete documentation library
Operational context:
- Current fleet topology (which relays, which providers, which regions)
- Current user assignment map
- Current Iagon node status and IAG stake positions
- Provider account details and access procedures
- DNS record structure and management procedures
- Monitoring endpoints and what healthy looks like
Recovery runbooks as conversational context:
- How to verify the fleet is healthy
- How to handle each tier of the emergency protocol
- How to spawn a replacement relay
- How to update DNS records
- How to manage Iagon nodes
- How to pay provider invoices
- How to rotate credentials
- How to contact relevant parties
The trusted actor’s experience:
1. Receive sealed envelope from secure storage
2. Power on recovery machine
3. Enter decryption passphrase from envelope
4. Open AI CLI (pre-configured, project loaded)
5. Describe the situation in plain language
6. AI CLI guides them through diagnostics and actions
7. The system tells them what's healthy, what needs attention, and exactly what commands to run

The trusted actor does not need to understand 0k-sync architecture, Rust, Cardano, Iagon, or relay orchestration. They need to be technically competent (comfortable with a terminal) and authorised (legal standing to act for the operator). The AI CLI bridges the knowledge gap between a competent operator and a system architect.
12.4 Sealed Envelope Protocol
Two sealed envelopes, stored at two separate trusted locations with two separate trusted actors. Each envelope contains:
```
ENVELOPE CONTENTS:
1. Recovery machine decryption passphrase
2. Recovery machine physical location
3. One-page summary: what VardKista is, what this machine does
4. Legal authority documentation (power of attorney for operations)
5. Contact list: relevant partners, legal counsel, accountant
6. Provider account recovery procedures (2FA backup codes)
7. Wallet recovery (seed phrase for IAG management wallet)
8. Instructions: "Open terminal. Launch AI CLI. Describe your situation."
```

The envelope is updated whenever material changes occur — new providers, new credentials, new trusted actors. The recovery machines are tested periodically to ensure they boot, decrypt, and the AI CLI instance is current.
12.5 Drift Management — Periodic Context Refresh
The AI CLI recovery instance is only as good as its last update. If the fleet architecture changes significantly and the cold standby machine still holds stale context, the recovery guidance may be incorrect or dangerous.
Procedure: every 6 months (January and July), or after any material infrastructure change:
- Boot each recovery machine.
- Decrypt and log in.
- Pull the latest repository, specifications, and documentation.
- Update the AI CLI project context with current fleet state.
- Verify the instance can answer basic operational questions correctly: “What relays are running? Which providers? How do I check fleet health?”
- Update provider credentials if rotated since last refresh.
- Update the sealed envelope if any credentials, contacts, or procedures have changed.
- Power down and re-secure.
Material infrastructure changes that trigger an immediate refresh:
- New relay group added (new region, new providers)
- Provider changed
- Credential rotation on any provider or DNS account
- Iagon node changes (new stake, new storage commitment)
- Legal changes (new trusted actor, updated power of attorney)
- Wallet changes (new address, new seed)
The refresh is logged with date and summary of changes. The sealed envelope always states the date of last refresh so the trusted actor knows how current the recovery machine’s context is.
12.6 Dead Man’s Switch — Invoice Runway
If the lead engineer is incapacitated, provider invoices continue arriving. Cloud providers will suspend services after failed payments — typically 7-14 days grace, then termination. The fleet could go dark not from an attack but from an unpaid invoice.
Mitigations:
- Pre-funded runway. The operating account maintains a minimum balance covering 6 months of infrastructure costs. This provides a window between “engineer unavailable” and “trusted actor assumes financial control.”
- Auto-pay enabled. All provider accounts are configured with automatic payment from the operating account. No manual invoice approval required. Invoices are paid even if nobody logs in.
- Payment failure alerts. Provider payment failure notifications are routed to both the lead engineer and the trusted actors’ email addresses. A failed payment is an early warning that something is wrong — either the account is underfunded or the payment method has expired.
- Trusted actor bank access. The sealed envelope includes operating account access (or the legal authority to obtain it). The trusted actor can replenish the operating account if the runway is depleted.
- Annual review. The 6-month runway balance is verified during each periodic context refresh (Section 12.5). If infrastructure costs have increased (new relay groups, more storage), the runway balance is adjusted.
The timeline:
```
Day 0:    Lead engineer becomes unavailable
Day 1-30: Auto-pay handles all invoices. Fleet runs on autopilot.
Day 30+:  If trusted actor has not yet activated, runway covers costs.
Day 180:  Runway depleted. Trusted actor MUST have assumed control by now.
          If not, providers begin suspension. Fleet degrades.
          Iagon nodes and decentralised archive persist regardless.
          User data survives on the unkillable layer even if managed
          infrastructure goes dark.
```

Even in the absolute worst case — nobody takes over, every provider suspends, every managed relay goes offline — the encrypted archive exists on the Iagon network and on the recovery machines’ local storage. The data does not die. Recovery is possible whenever someone eventually activates a recovery machine.
12.7 What the Trusted Actor Must NOT Do
The sealed envelope and recovery runbook explicitly state:
- Do not attempt to modify the relay code. The fleet runs as-is. If something is broken at the code level, it requires a developer, not an operator.
- Do not rotate all credentials simultaneously. Change one thing at a time, verify it works, then proceed.
- Do not shut down relays that appear healthy. If in doubt, observe. The system is designed to self-heal.
- Do not access user data. The recovery machine has operational credentials, not user key material. Zero-knowledge holds even for the recovery operator.
- When uncertain, ask the AI CLI. The instance has the context to advise on any operational question about the system.
12.8 Legal Framework
Business continuity requires legal authority, not just technical access. The trusted actors must have:
- Power of attorney for operations (Swedish law)
- Authority to pay invoices from operating accounts
- Authority to communicate with providers on behalf of the operator
- Understanding of their obligations (keep the service running, protect user privacy, do not access user data)
The legal documentation is prepared by counsel and included in the sealed envelope. It is reviewed annually or whenever the trusted actor arrangement changes.
12.9 The Chain of Survival
```
Lead operational        → Normal operations, all tiers healthy

Lead temporarily unable → Fleet runs on autopilot (days to weeks)
                          HQ auto-heals relay failures
                          No human intervention needed

Lead extended absence   → Trusted actor activates Tier 3
                          Recovery machine boots, AI CLI guides
                          Invoices paid, credentials managed, fleet monitored

Lead permanently unable → Trusted actor assumes operations
                          Legal authority via sealed envelope
                          Full system knowledge via AI CLI instance
                          Business decisions escalated to board/counsel
```

At every stage, user data remains protected. The zero-knowledge guarantee is independent of who operates the infrastructure. A trusted actor with full operational access cannot read a single journal entry, financial record, or receipt. The filing cabinet’s lock does not have a master key. Not for the lead. Not for the trusted actor. Not for anyone.
13. Summary
The relay pool orchestration model extends 0k-sync’s multi-relay fan-out from client-side protocol to fleet-scale infrastructure. Three layers — distributed client detection, relay aggregation and verification, and HQ decision authority — form a self-healing system where dead relays are detected in minutes, confirmed through independent verification, and replaced automatically.
Provider diversity ensures no single provider failure affects any user’s redundancy. DNS-based relay identity ensures replacements are transparent to clients. HQ as a fortress holds the global view and the spawn authority while maintaining zero contact with user data.
The decentralised cold archive is the final layer — the unkillable floor beneath all managed infrastructure. Encrypted blobs stored on self-operated Iagon nodes and distributed across the wider Iagon network on Cardano. Pre-funded with existing IAG holdings. Tied to on-chain payment receipts that prove storage entitlement without any managed service. It transforms the most extreme adversary scenario from “data loss risk” to “slower recovery.” The data survives because there is nothing to destroy.
The launch topology is deliberately minimal — two relay groups (EU and APAC), 18 containers across 6 provider instances, ~€330/month. Capacity for 1-2M users. New groups added at every 500K user milestone, each adding 9 containers on 3 providers. The system scales linearly and gets more resilient the larger it grows.
Business continuity extends beyond infrastructure to human operations. Two recovery machines with fully loaded AI CLI instances, sealed envelopes with trusted actors, and legal authority documentation ensure the system survives not just technical failures but the incapacitation of its creator. The trusted actor needs a terminal and a passphrase — the AI CLI provides the knowledge.
The system degrades gracefully through every failure mode. A dead relay: fan-out handles it. HQ down: fleet continues serving. Managed infrastructure seized: Iagon archive holds. Lead unavailable: trusted actor with AI CLI takes over. At every tier, Ring 1 holds. At every tier, the data is opaque. At every tier, the filing cabinet survives.
The filing cabinet cannot be burned. The final boss cannot be beaten. There is nothing to fight.
Version: 2.1.0 | Date: 2026-02-09