
Configuration

```rust
SyncConfig {
    backend: RelayBackend::Iroh,
    ..Default::default()
}
```

Uses iroh public network. No relay infrastructure needed.

iroh Version Strategy:

  • Using iroh 0.96 (pre-1.0, requires cargo patch for curve25519-dalek)
  • iroh-blobs 0.98 for large content transfer
  • Self-hosted infrastructure available via iroh-relay and iroh-dns-server
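The curve25519-dalek patch mentioned above is applied through Cargo's `[patch]` mechanism. A hedged sketch of what that section looks like — the exact source to pin is an assumption and should be taken from the iroh 0.96 release notes:

```toml
# Assumption: the precise patch source must match what iroh 0.96 requires
[patch.crates-io]
curve25519-dalek = { git = "https://github.com/dalek-cryptography/curve25519-dalek" }
```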
```rust
SyncConfig {
    backend: RelayBackend::SyncRelay {
        node_id: "your-sync-relay-node-id".parse()?,
        relay_url: None, // Uses default iroh-relay, or set custom
    },
    ..Default::default()
}
```

Self-hosted Docker container, discovered via mDNS on the LAN or via DNS for remote access. Cloudflare Tunnel can proxy the QUIC connection.

```rust
SyncConfig {
    backend: RelayBackend::SyncRelay {
        node_id: "fly-relay-node-id".parse()?,
        relay_url: Some("https://relay.fly.dev".parse()?),
    },
    ..Default::default()
}
```

Container deployed to Fly.io. NodeId published via DNS TXT record at a known domain.
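A sketch of what that TXT record could look like — the record name and `key=value` format are assumptions; iroh's DNS discovery conventions define the actual layout:

```
; Hypothetical zone entry: publish the relay's NodeId under a known name
relay.example.com.  300  IN  TXT  "node-id=fly-relay-node-id"
```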

```rust
SyncConfig {
    backend: RelayBackend::ManagedCloud {
        api_key: "cn_live_xxxx".into(),
    },
    ..Default::default()
}
```

API key authenticates to CrabNebula managed infrastructure. Discovery service resolves API key to a sync-relay NodeId.

```rust
SyncConfig {
    backend: RelayBackend::Enterprise {
        node_id: "enterprise-relay-node-id".parse()?,
        auth: EnterpriseAuth::Oidc {
            issuer: "https://auth.corp.com".into(),
            client_id: "sync-client".into(),
        },
    },
    ..Default::default()
}
```

Customer-deployed with enterprise auth integration. Dedicated sync-relay identified by NodeId.


| Code | Name | Description |
|------|------|-------------|
| 1000 | INVALID_MESSAGE | Malformed message |
| 1001 | UNKNOWN_GROUP | Group ID not found |
| 1002 | UNAUTHORIZED | Auth failed |
| 1003 | DEVICE_REVOKED | Device has been revoked from group |
| 1004 | NOT_BLOB_OWNER | Only blob creator can force delete |
| 2000 | RATE_LIMITED | Too many requests |
| 2001 | BLOB_TOO_LARGE | Exceeds 1 MB limit |
| 2002 | GROUP_QUOTA_EXCEEDED | Group storage full |
| 2003 | INVALID_PUSH_TOKEN | Push token format invalid |
| 3000 | RELAY_OVERLOADED | Server at capacity |
| 3001 | RELAY_SHUTTING_DOWN | Graceful shutdown |

⚠️ Thundering Herd Mitigation Required: After relay restart, all clients reconnect simultaneously, potentially crashing the database or exhausting connection limits. Clients MUST implement jittered backoff.

Attempt 1: Wait 1s + jitter
Attempt 2: Wait 2s + jitter
Attempt 3: Wait 4s + jitter
Attempt 4: Wait 8s + jitter
Attempt 5: Wait 16s + jitter
Attempt 6+: Wait 30s (max) + jitter
Jitter: 0-5000ms random (not ±20%)
Reset: On successful connection

Implementation:

```rust
use std::time::Duration;
use rand::Rng;

// attempt is 1-indexed; the base delay doubles from 1s and is capped at 30s,
// then 0-5000ms of random jitter is added on top (matching the schedule above).
async fn reconnect_with_backoff(attempt: u32) {
    let base_delay = Duration::from_millis(1000 * 2u64.pow(attempt.saturating_sub(1).min(5)))
        .min(Duration::from_secs(30));
    let jitter = Duration::from_millis(rand::thread_rng().gen_range(0..5000));
    tokio::time::sleep(base_delay + jitter).await;
}
```

```toml
[server]
bind = "127.0.0.1:8080"
max_connections = 1000

[storage]
database = "/data/relay.db"
max_blob_size = 1048576       # 1 MB
max_group_storage = 104857600 # 100 MB
default_ttl = 604800          # 7 days

[cleanup]
interval = 3600 # 1 hour
vacuum_on_cleanup = true

[limits]
messages_per_minute = 100
connections_per_ip = 10

[sync]
backend = "sync-relay"
# Multi-relay fan-out: list relays in preference order (first = primary)
relay_addresses = ["primary-node-id", "secondary-node-id"]
# or for DNS-based discovery:
# relay_discovery = "https://sync.example.com/.well-known/iroh"
auto_reconnect = true
reconnect_delay_ms = 1000
max_reconnect_delay_ms = 30000
```

Push fan-out sends encrypted blobs to all configured relays concurrently. The primary relay’s acknowledgement is returned to the caller; secondary results are fire-and-forget. Pull failover tries each relay in order until one responds. Each relay tracks its own cursor independently.

See docs/MULTI-RELAY-SPEC.md for full multi-relay architecture.

| Variable | Purpose | Default |
|----------|---------|---------|
| SYNC_RELAY_ADDRESSES | Override relay NodeIds (comma-separated) | Config file |
| SYNC_API_KEY | Managed Cloud API key | None |
| SYNC_LOG_LEVEL | Logging verbosity | info |
| SYNC_DEVICE_NAME | Human-readable name | Hostname |

⚠️ The 1 MB Blob Trap: While sync-relay limits blobs to 1 MB (appropriate for state deltas), developers will inevitably try to sync images, videos, and large files. This section provides guidance.

What Belongs in Sync Blobs:

| ✅ Sync via Relay | ❌ Store Elsewhere |
|-------------------|--------------------|
| Transaction records | Images/photos |
| User preferences | Videos |
| App state deltas | PDF documents |
| Small JSON (<100KB) | Large binary files |
| CRDT operations | User-generated media |

Large Asset Strategy:

1. Upload the asset to object storage
2. Sync the metadata via relay
3. Other devices fetch the asset from the storage URL

Implementation Example:

```rust
// ❌ WRONG: Syncing large files directly
async fn save_photo_wrong(photo_bytes: &[u8], sync: &SyncClient) -> Result<()> {
    // This will fail if the photo > 1 MB
    sync.push(photo_bytes).await?; // ERROR: BLOB_TOO_LARGE
    Ok(())
}

// ✅ CORRECT: Sync metadata, store asset externally
async fn save_photo_correct(
    photo_bytes: &[u8],
    storage: &ObjectStorage,
    sync: &SyncClient,
) -> Result<PhotoRecord> {
    // 1. Upload to object storage
    let storage_url = storage.upload("photos", photo_bytes).await?;

    // 2. Create metadata record
    let record = PhotoRecord {
        id: Uuid::new_v4(),
        storage_url,
        mime_type: "image/jpeg".into(),
        size_bytes: photo_bytes.len(),
        created_at: now(),
    };

    // 3. Sync only the metadata (< 1 KB)
    let metadata_blob = serde_json::to_vec(&record)?;
    sync.push(&metadata_blob).await?;
    Ok(record)
}
```

Recommended Object Storage Options:

| Provider | Best For | Pricing Model |
|----------|----------|---------------|
| Cloudflare R2 | Cost-effective, no egress | Pay per storage |
| Supabase Storage | Integrated with Supabase | Generous free tier |
| AWS S3 | Enterprise, existing AWS | Pay per everything |
| Self-hosted MinIO | Full control | Infrastructure cost |