/ docs

How it actually works.

A high-level overview of the architecture, the configuration MeshWG generates, and the test harness that verifies every claim on this site.

/ 01 architecture

Architecture in five claims.

Hub-and-spoke through one controller

All overlay traffic between your machines passes through a controller you choose (ours, or one you self-host). The controller is the single place to apply policy. Peers don't need to discover each other.
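
The practical consequence: each machine's WireGuard interface has exactly one configured peer, the hub. You can confirm this on any enrolled machine (interface name wg0 assumed, matching the generated config below):

# a spoke only ever knows about the hub
wg show wg0 peers          # prints a single public key: the tenant hub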

Per-tenant Linux network namespace

Each organization runs in its own kernel network namespace on the controller. Two tenants on the same controller cannot see each other's overlay traffic, even with identical overlay IP ranges.
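
A sketch of what that looks like on the controller, with hypothetical namespace names (the real names are chosen by the controller):

# each tenant's overlay exists only inside its own namespace,
# so identical ranges never collide
ip netns exec tenant-a ip route    # 10.100.0.0/16 dev wg0
ip netns exec tenant-b ip route    # 10.100.0.0/16 dev wg0, a different wg0 entirely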

iptables-enforced ACLs

Allow/deny policies are written into the FORWARD chain of the tenant's namespace. Rules apply at the kernel level — when a policy says deny, the kernel drops the packet before it reaches the destination.
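
Roughly the shape of a deny rule inside the namespace. Addresses and the <tenant-ns> name are illustrative; the controller writes and orders the real rules:

# deny overlay traffic from 10.100.0.2 to 10.100.0.3 at the kernel
ip netns exec <tenant-ns> iptables -A FORWARD -s 10.100.0.2 -d 10.100.0.3 -j DROP
ip netns exec <tenant-ns> iptables -nvx -L FORWARD    # per-rule packet and byte counters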

Encrypted hub keys at rest

Each tenant's hub private key is generated server-side and stored AES-256-GCM encrypted in PostgreSQL. The encryption key is read from an environment variable; the controller refuses to start if it's unset.

No internet egress through the hub

By design and verified: the tenant namespace has no MASQUERADE rules, no SNAT, and no path to public networks. The hub forwards overlay traffic between your peers only. Peers reach the internet via their own local network.
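
Both halves of that claim can be checked from the controller itself. Namespace name illustrative:

# no NAT rules, so nothing can be rewritten toward the internet
ip netns exec <tenant-ns> iptables -t nat -S | grep -E 'MASQUERADE|SNAT'    # prints nothing

# and no route out: a public address is simply unreachable from inside
ip netns exec <tenant-ns> ping -c 1 -W 2 1.1.1.1    # fails, no route or 100% loss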

/ 02 the config we generate

Standard WireGuard. Nothing custom.

When you add a machine in the dashboard, MeshWG returns a wg-quick configuration that any WireGuard implementation accepts as-is. Below is the exact format.

# wg0.conf — generated for machine "branch-mumbai"
[Interface]
PrivateKey          = <generated server-side, returned once>
Address             = 10.100.0.2/16
MTU                 = 1420

[Peer]
# tenant hub for "acme-networks"
PublicKey           = <tenant hub public key>
Endpoint            = vpn.meshwg.com:51820
AllowedIPs          = 10.100.0.0/16
PersistentKeepalive = 25
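
Bringing it up is the standard wg-quick flow; nothing MeshWG-specific is involved. The overlay address pinged below is illustrative:

# install the config and start the tunnel
cp wg0.conf /etc/wireguard/wg0.conf
wg-quick up wg0

# a recent timestamp here means the hub answered the handshake
wg show wg0 latest-handshakes

# reach another of your machines over the overlay (if policy allows it)
ping 10.100.0.3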

/ 03 what it doesn't do

The honest list of things MeshWG isn't.

  • We don't ship firmware. The router runs whatever it normally runs.
  • We don't install an agent on your devices. The WireGuard implementation is the one already in the device.
  • We don't invent a new tunnel protocol. It's standard WireGuard, unmodified.
  • We don't intercept your traffic — it's WireGuard-encrypted end to end.
  • We don't retain per-machine private keys. They're shown to you once at creation, then gone.

/ 04 how we verify it works

An end-to-end harness with kernel-level oracles.

Every release runs through a 24-phase test that drives the real controller and two test VMs over SSH, then asserts against actual kernel counters and tcpdump captures — not just HTTP status codes. If anything regresses, the deploy fails.

  • Auth flow: Real HTTP login → JWT cookie → CSRF + rate-limit checks
  • Tunnel handshake: Two real Ubuntu VMs do WireGuard handshakes against the controller
  • Policy enforcement: Apply a deny rule, count actual packets dropped at the kernel (sketched below)
  • Cross-tenant isolation: tcpdump in tenant B's namespace sees 0 packets while tenant A floods
  • Concurrent IP allocation: 5 parallel POSTs, verify no two devices get the same overlay IP
  • Policy flap soak: 30 toggle cycles, every transition matches expected state
  • Reconciler soak: Delete the namespace 5 times, restart the service, ping must work each time
  • Tenant burst: 8 parallel org creates, verify each gets a unique port + working wg0
  • Live ping flood: Flood ping during 10 policy toggles, count kernel drops vs received
  • Internet egress: Verify external IPs are unreachable from inside a tenant namespace
  • Long-name regression: 60-char org name with special chars must onboard without orphan interfaces
  • Delete-org: API delete → cascade through devices, users, namespace, no leftover state
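
To make "kernel-level oracle" concrete, here is the shape of the policy-enforcement check, compressed into one sketch. In the real harness (e2e/run.sh) the ping runs on a test VM over SSH and the counter is read on the controller; the namespace name and addresses here are illustrative:

# read the packet counter on the deny rule, send traffic, read it again
before=$(ip netns exec <tenant-ns> iptables -nvx -L FORWARD | awk '$3 == "DROP" {print $1; exit}')
ping -c 10 -W 1 10.100.0.3 || true     # traffic the deny rule should drop
after=$(ip netns exec <tenant-ns> iptables -nvx -L FORWARD | awk '$3 == "DROP" {print $1; exit}')
[ "$after" -gt "$before" ] || { echo "deny rule did not drop packets"; exit 1; }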

Full harness source: e2e/run.sh on GitHub ↗

/ 05 self-host the controller

Run MeshWG on your own VM.

One Linux VM, one PostgreSQL, one Caddy in front for TLS. The install is five copy-paste commands.

# on a fresh Ubuntu 22.04+ host; the build step below also needs a recent Go toolchain
apt install wireguard-tools iptables postgresql-16 caddy

# clone, build, deploy artifacts
git clone https://github.com/vikasswaminh/meshwg
cd meshwg && go build -o /root/quickmesh ./cmd/quickmesh

install -m 0600 deploy/.env.example /etc/quickmesh.env
# edit /etc/quickmesh.env — set ENCRYPTION_KEY, JWT_SECRET, ADMIN_PASSWORD

install -m 0644 deploy/quickmesh.service /etc/systemd/system/
install -m 0644 deploy/Caddyfile         /etc/caddy/Caddyfile

systemctl daemon-reload && systemctl enable --now quickmesh caddy
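
Two follow-ups worth doing. The exact format each secret expects isn't shown here; deploy/.env.example documents it, and openssl is one way to generate strong values:

# generate strong values for ENCRYPTION_KEY / JWT_SECRET before first start
openssl rand -base64 32

# confirm the controller came up cleanly behind Caddy
systemctl status quickmesh --no-pager
journalctl -u quickmesh -n 20 --no-pager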