Zedmos
DEPLOY

Three reference architectures. One engine at the heart of each.

A deployment playbook for the common topologies. For each scenario you'll find platform requirements, operational characteristics, and the anti-patterns that most often cause friction in practice.

SINGLE APPLIANCE

One site, one platform, inline or routed.

The shortest path from evaluation to production. A single hardened appliance hosts the full Zedmos stack — engine, policy, identity mapping, and the management UI.

Platform
  • OPNsense 24.x or newer
  • Multi-queue 1 GbE or 10 GbE interface
  • Four CPU cores and 8 GB RAM for gigabit-class inspection
  • Eight CPU cores and 16 GB RAM for 10 Gbps-class inspection
  • A fast-path-capable interface driver — validated at install time
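The sizing tiers above can be expressed as a simple preflight check. This is a minimal sketch built from the figures in this list; the function and tier table are illustrative, not part of the Zedmos installer.

```python
# Hypothetical preflight sizing check based on the tiers listed above.
# The (cores, RAM) figures come from this playbook; the helper itself
# is illustrative, not a Zedmos tool.

SIZING_TIERS = {
    "1g":  {"cores": 4, "ram_gb": 8},    # gigabit-class inspection
    "10g": {"cores": 8, "ram_gb": 16},   # 10 Gbps-class inspection
}

def preflight_ok(tier, cores, ram_gb):
    """Return True if the host meets the minimums for the chosen tier."""
    req = SIZING_TIERS[tier]
    return cores >= req["cores"] and ram_gb >= req["ram_gb"]
```

For example, an 8-core, 16 GB host passes the "10g" tier, while a 4-core, 8 GB host passes only "1g".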
Operational model
  • Local management UI on the appliance itself
  • Local event store with optional SIEM forwarding
  • Hot-reload of policies and threat intelligence
  • Reversible, non-destructive installation
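Hot-reload in this context means swapping the active policy atomically rather than restarting the data path. A minimal sketch of that pattern follows; the class and field names are assumptions for illustration, not Zedmos APIs.

```python
import threading

class PolicyStore:
    """Holds the active policy; readers never block on a reload.
    Illustrative pattern only, not the Zedmos implementation."""

    def __init__(self, policy):
        self._lock = threading.Lock()
        self._policy = policy

    def get(self):
        # Reading a reference is atomic in CPython, so lookups on the
        # hot path see either the old or the new policy, never a mix.
        return self._policy

    def reload(self, new_policy):
        with self._lock:            # serialise writers only
            self._policy = new_policy

store = PolicyStore({"generation": 1})
store.reload({"generation": 2})     # in-flight lookups keep the old object
```

The same swap-not-restart idea applies to threat-intelligence updates: build the new data set off to the side, then publish it in one atomic step.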
Avoid
  • Deploying directly into enforcement without a monitor-mode baseline
  • Mixing legacy filtering stacks in the same data path
  • Running on unsupported driver generations without preflight validation

HIGH-AVAILABILITY PAIR

Two appliances with synchronised state.

A redundant pair running in active-standby. Policy, identity, and runtime state stay synchronised so that a failure of the primary is transparent to users.

Platform
  • Two identical appliances
  • Dedicated synchronisation interface
  • Virtual IP on the LAN and WAN sides
  • Identical engine build and policy generation on both nodes
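Failover is only trustworthy when both nodes agree on engine build and policy generation. A minimal parity check might look like this; the field names are assumptions chosen to match the list above.

```python
# Illustrative HA parity check. Field names are assumptions; an empty
# result means the pair is consistent and safe to fail over.

def nodes_in_sync(primary, standby):
    """Return the list of fields on which an HA pair disagrees."""
    mismatches = []
    for field in ("engine_build", "policy_generation"):
        if primary.get(field) != standby.get(field):
            mismatches.append(field)
    return mismatches
```

A pair reporting the same build and generation returns an empty list; anything else names the drifted field so the operator can correct it before enabling preempt.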
Operational model
  • Policy and identity state mirrored in real time
  • Health-aware promotion with operator override
  • Single management pane spans both nodes
  • One-click recovery after a split-brain event
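Health-aware promotion with operator override reduces to a small decision rule: take over only when the peer is down, the synchronisation link can be trusted, and no operator has pinned the current roles. A sketch of that rule, under those assumptions:

```python
# Illustrative promotion logic for the standby node. Not the Zedmos
# failover engine; the three inputs are assumptions drawn from the
# operational model above.

def should_promote(peer_healthy, sync_link_ok, operator_hold):
    """Decide whether the standby should take over."""
    if operator_hold:
        return False    # operator override always wins
    if not sync_link_ok:
        return False    # degraded sync link: peer state is unknown,
                        # promoting here is how split-brain starts
    return not peer_healthy
```

Note that a degraded synchronisation link blocks promotion outright, which is the same reasoning behind the "leaving preempt enabled" anti-pattern below.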
Avoid
  • Running different engine builds on the two nodes
  • Synchronising over a saturated LAN segment
  • Leaving preempt enabled while the synchronisation link is degraded

MULTI-SITE SASE

Hub pair, orchestrator, and distributed spokes.

A managed overlay that connects branches, cloud egress points, and roaming users to a central policy plane. Enforcement is distributed; policy, identity, and observability are centralised.

Platform
  • Two hub nodes (bare metal or virtualised), eight cores and 16 GB each
  • Hardened orchestrator with a replicated data store
  • Reachable public endpoints for both hubs
  • Spokes: branch appliances, compact Linux gateways, or roaming agents
Operational model
  • Central policy and identity distribution
  • Sub-10-second failover between hubs, driven by continuous health checks
  • Structured observability pipeline to the SIEM of your choice
  • Multi-tenant operational model available
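Spoke-side hub selection is what makes the sub-10-second failover figure achievable: each spoke probes its hubs and moves to the backup after a few consecutive misses. The sketch below assumes roughly 3-second probes and a miss threshold of 3; all names and parameters are illustrative, not the Zedmos spoke agent.

```python
# Illustrative spoke-side hub selection. probe_history maps each hub
# to its recent probe results (True = reply received), newest last.

def pick_hub(probe_history, preferred, miss_threshold=3):
    """Return the first preferred hub that has not exceeded the
    consecutive-miss threshold, or None if every hub is down."""
    for hub in preferred:
        recent = probe_history.get(hub, [])
        misses = 0
        for ok in reversed(recent):
            if ok:
                break
            misses += 1
        if misses < miss_threshold:
            return hub
    return None
```

With ~3 s probes and a threshold of 3, a dead primary is abandoned within about 9 seconds, which is what keeps failover inside the stated window.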
Avoid
  • Testing failover with probe endpoints that share a failure domain with the hub
  • Oversubscribing a single hub beyond its measured capacity envelope
  • Leaving spokes without fallback endpoints to the backup hub