Zedmos
SIZING

Pick the right box for the deployment, not the spec sheet.

Zedmos throughput is dominated by single-thread CPU score, NIC fast-path support, and how much of the inspection pipeline is enabled. Below are the hardware bands we recommend for each deployment size — derived from the engine's per-mode cost profile, not from a controlled lab benchmark.

RECOMMENDED TIERS

Hardware bands by deployment size

Each tier assumes Zedmos is the only resource-intensive workload on the host. Add headroom for co-resident services (Suricata, log shippers, observability stacks).

SOHO / Home lab
  Active devices:  up to 50
  WAN bandwidth:   up to 200 Mbps
  NIC:             1 GbE, igb(4) or em(4)
  CPU:             Intel N100 / J4125 class, 4 cores, strong single-thread score
  RAM:             4 GB
  Disk:            120 GB SSD

Small office
  Active devices:  50-250
  WAN bandwidth:   200-500 Mbps
  NIC:             1 GbE multi-queue, igb(4)
  CPU:             Intel i3 / Xeon E-23xx, 4-6 cores
  RAM:             8 GB
  Disk:            240 GB SSD

Branch
  Active devices:  250-1000
  WAN bandwidth:   500 Mbps - 1 Gbps
  NIC:             1 GbE multi-queue or 10 GbE, ix(4) / ixl(4)
  CPU:             Intel i5 / Xeon E-24xx, 6-8 cores
  RAM:             16 GB
  Disk:            480 GB SSD

Mid-market
  Active devices:  1000-2500
  WAN bandwidth:   1-2 Gbps
  NIC:             10 GbE multi-queue, ixl(4)
  CPU:             Intel Xeon Silver / Gold, 8-12 cores
  RAM:             32 GB
  Disk:            960 GB NVMe

Hub / SD-WAN concentrator
  Active devices:  2500+
  WAN bandwidth:   2-5 Gbps
  NIC:             10/25 GbE multi-queue, ixl(4) or mlx5(4)
  CPU:             Intel Xeon Gold / AMD EPYC, 16+ cores
  RAM:             64 GB
  Disk:            1.92 TB NVMe
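The device-count and bandwidth bands above can be encoded as a small lookup for sanity-checking a deployment. This is an illustrative sketch, not a Zedmos tool: the tier names and thresholds come straight from the table, while the function name and structure are this example's own.

```python
# Tier thresholds transcribed from the table above:
# (name, max active devices, max WAN Mbps). None = no upper device bound.
TIERS = [
    ("SOHO / Home lab",             50,   200),
    ("Small office",               250,   500),
    ("Branch",                    1000,  1000),
    ("Mid-market",                2500,  2000),
    ("Hub / SD-WAN concentrator", None,  5000),
]

def recommend_tier(active_devices: int, wan_mbps: int) -> str:
    """Return the smallest tier covering BOTH the device count and the
    WAN bandwidth — whichever dimension is larger drives the pick."""
    for name, max_devices, max_mbps in TIERS:
        if (max_devices is None or active_devices <= max_devices) and wan_mbps <= max_mbps:
            return name
    # Beyond 5 Gbps WAN the table stops; treat as hub-class and plan to scale out.
    return "Hub / SD-WAN concentrator"

print(recommend_tier(40, 150))    # → SOHO / Home lab
print(recommend_tier(800, 600))   # → Branch
print(recommend_tier(100, 1500))  # → Mid-market (WAN bandwidth dominates here)
```

Note that a small device count with a fat WAN pipe still lands in a higher tier: both dimensions must fit.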
Why no Gbps / latency table?
DPI throughput depends on the traffic mix (HTTPS vs QUIC vs bulk), NIC offload (LRO, checksum, RSS queues), and single-thread CPU score. A synthetic 64-byte UDP flood number tells you nothing about how the box will behave on real traffic. Pick the tier above that matches your active-device count and WAN bandwidth, then validate with a 24-hour live tap before going inline.
HOW TO SIZE

What actually moves the needle

Single-thread score over core count
The packet pipeline runs on pinned per-NIC-queue workers. A 4-core CPU with high single-thread score outperforms a 12-core part with low per-core frequency for inline DPI.
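The trade-off above can be made concrete with toy arithmetic (hypothetical scores, not benchmark data): because RSS hashes each flow to one queue worker, any individual flow is capped by a single core, even though aggregate capacity scales with core count.

```python
def flow_ceiling(single_thread_score: float) -> float:
    # Fastest rate one pinned worker can inspect a single flow at
    # (arbitrary units, assumed proportional to per-core score).
    return single_thread_score

def aggregate_ceiling(single_thread_score: float, cores: int) -> float:
    # Best case across many flows spread evenly over all queue workers.
    return single_thread_score * cores

fast_quad = 2000    # hypothetical: 4 cores, high per-core score
slow_twelve = 900   # hypothetical: 12 cores, low per-core frequency

# The 12-core part wins on aggregate capacity (10800 vs 8000 units)...
assert aggregate_ceiling(slow_twelve, 12) > aggregate_ceiling(fast_quad, 4)
# ...but every individual flow runs over 2x faster on the quad,
# which is what inline DPI latency actually tracks.
assert flow_ceiling(fast_quad) > 2 * flow_ceiling(slow_twelve)
```

The linear-scaling assumption is a simplification; the direction of the conclusion is the point.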
Pick a NIC with FreeBSD fast-path support
Intel em(4), igb(4), ix(4), and ixl(4) have native fast-path drivers. Realtek and Broadcom 1 GbE chips work but cost throughput and add jitter under load.
TLS / SNI inspection raises CPU cost
Enabling TLS handshake inspection for category and policy enforcement adds roughly 30-40% CPU per active session. Size up one tier if TLS inspection is on by default.
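As a back-of-envelope check, assuming the 30-40% figure above applies uniformly to engine worker load, a box that looks comfortable without TLS inspection can lose its headroom the moment it is switched on:

```python
def cpu_with_tls(base_cpu_pct: float, tls_uplift: float = 0.35) -> float:
    """Estimated worker CPU with TLS handshake inspection enabled.

    base_cpu_pct: measured engine worker CPU without TLS inspection.
    tls_uplift:   0.30-0.40 per the guidance above; 0.35 is a midpoint.
    """
    return base_cpu_pct * (1 + tls_uplift)

# A box at 60% without TLS inspection lands at roughly 78-84% with it:
low, high = cpu_with_tls(60, 0.30), cpu_with_tls(60, 0.40)
print(f"{low:.0f}-{high:.0f}%")  # → 78-84% — little headroom left, size up a tier
```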
Log plane needs its own RAM budget
RAM figures above cover the engine and control plane. If you co-host the log store (Elasticsearch, ClickHouse) on the same box, add 8-16 GB on top and put the log volume on a separate NVMe.
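The RAM rule above reduces to simple addition; a sketch, using the section's own 8-16 GB guidance for a co-hosted log store:

```python
def ram_budget_gb(tier_ram_gb: int, cohosted_log_store: bool,
                  log_store_gb: int = 16) -> int:
    """Total host RAM: the tier's engine/control-plane figure, plus the
    log store allowance (8-16 GB; 16 is the safe end) if it shares the box."""
    return tier_ram_gb + (log_store_gb if cohosted_log_store else 0)

# Branch tier (16 GB) with Elasticsearch on the same host:
print(ram_budget_gb(16, cohosted_log_store=True))   # → 32
# Same tier with logs shipped to a remote store:
print(ram_budget_gb(16, cohosted_log_store=False))  # → 16
```

The separate-NVMe advice for the log volume still applies regardless of the RAM figure.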
VALIDATE BEFORE INLINE

Run it on your traffic, not ours

Deploy in passive monitor mode on a SPAN or mirror port for 24 hours and watch per-worker CPU through your real peak. If the busiest engine worker core still has headroom during your busiest hour, you have headroom for inline. If it doesn't, move up a tier before bridging or routing through the box.
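The go/no-go decision above can be sketched as a check over busiest-hour per-core samples. The collection method is up to you, and the 80% threshold is this example's assumption, not a Zedmos-documented limit:

```python
def inline_ready(per_core_busy_pct: list[float], threshold: float = 80.0) -> bool:
    """True only if the hottest engine worker core stayed under the
    threshold during the observation window — per-flow work cannot be
    spread across cores, so the max (not the average) is what matters."""
    return max(per_core_busy_pct) < threshold

# Busiest-hour per-worker-core utilization from the passive tap:
print(inline_ready([42.0, 55.0, 61.0, 48.0]))  # → True: safe to go inline
print(inline_ready([42.0, 91.0, 61.0, 48.0]))  # → False: one core is pinned, size up
```

An average of those four failing samples is only ~60%, which is exactly why averaging across cores hides the problem.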