How Edera’s components fit together
Edera’s hardened runtime layers a Xen-based hypervisor with a host kernel (Dom0), zone kernels (domU), and a set of focused user‑space daemons (daemon, storage, network, etc.). This guide walks through each component and how they interact on real systems.
Component map (at a glance)
- Hypervisor: Xen (PV/PVH), oxenstore (OCaml, memory‑safe), event channels, grant tables.
- Host kernel (Dom0): Aims to look like a normal distro kernel (with Xen + Dom0 enablement).
- Zone kernel (domU): guest kernel that runs your workloads inside zones.
- User‑space daemons: `protect-daemon` (brain/RPC/IDM host), `protect-storage`, `protect-network`, `protect-orchestrator`, `protect-cri`.
- Utilities shipped on the host filesystem: statically compiled `mkfs.ext4` and `squashfs`.
- Protect CLI: manages hosts, zones, workloads; exposes logs/metrics and introspection commands.
Zones vs. Workloads
- Zone: A lightweight VM (PVH/PV guest) with its own kernel, launched and managed by Edera. Zones are the unit of isolation: they hold resources like CPU, memory, and network devices. Think of a zone as the “execution sandbox.”
- Workload: An OCI image that runs inside a zone. When you launch a workload (`protect workload launch --zone <ZONE> <OCI>`), the daemon pulls the image, hands it to the zone over IDM, and styrolite starts it as a process with the requested resources.
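A minimal end‑to‑end sketch using the launch command above; the zone name `demo` and the nginx image are illustrative, and the zone is assumed to already exist:

```sh
# Launch an OCI image as a workload inside an existing zone named "demo".
# The daemon pulls the image, hands the spec to the zone over IDM,
# and styrolite starts it with the requested resources.
protect workload launch --zone demo docker.io/library/nginx:latest
```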
Hypervisor & kernels
Xen-OCI (hypervisor)
- Isolation for zones (domU) using PV or PVH
- oxenstore (OCaml KV store) mediates state/config between components via the hypervisor.
- Debug surface: `protect host hv-console` → Xen boot/warning logs (think: “`dmesg` for the hypervisor”).
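For example, if a zone fails to start and Dom0’s logs look clean, the hypervisor console is the next place to check (output varies by system):

```sh
# Dump the Xen boot/warning log: roughly "dmesg for the hypervisor".
protect host hv-console
```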
Host kernel (Dom0)
- Our goal is for the host kernel to be swappable and familiar.
- Common failure modes: early panics, missing kernel config options.
- Swappability: The installer copies the host‑kernel filesystem image; you can re‑run it to switch LTS kernels quickly.
- Where to look first: `dmesg` / kernel logs for Dom0; if you suspect the hypervisor, check `protect host hv-console`.
- Where to get everything: `protect host hv-debug-info | jq` reports the state of all devices; it exists to help Edera support with troubleshooting.
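A typical triage sequence might look like the following (assuming util‑linux `dmesg`; exact output varies by host):

```sh
# 1. Check recent Dom0 kernel warnings and errors.
dmesg --level=err,warn | tail -n 50

# 2. If the hypervisor is suspect, read its console log.
protect host hv-console

# 3. Capture full host/device state to share with Edera support.
protect host hv-debug-info | jq '.' > hv-debug.json
```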
Zone kernel (domU)
- Purpose‑built guest kernel that boots each Edera zone and runs the workloads. You’ll see the zone’s kernel console via `protect zone logs <zone>`.
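For instance, to read the kernel console of a running zone (the name `demo` is illustrative):

```sh
# Stream the zone kernel's console output for the zone named "demo".
protect zone logs demo
```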
User‑space daemons
`protect-daemon` (the brain)
- Main RPC server and glue between hypervisor and user space.
- Translates zone/workload specs → Xen domain configs; hosts the IDM channel for zone↔host coordination.
- On workload launch: daemon hands the (container) spec to the zone via IDM; inside the zone, styrolite realizes the spec (cgroups, process, env, etc.).
`protect-storage`
- Volume mount orchestration for zones/workloads.
`protect-network`
- Configures zone network devices/links and routes.
- Default: sets up connectivity so zones can talk to each other.
- SR‑IOV / passthrough: `protect-network` does nothing here; you own the wiring.
- Kubernetes: networking comes via CNI (currently through CRI).
`protect-orchestrator`, `protect-cri`, `protect-ctl`
- Supporting CLIs and CRI glue.
IDM (Inter‑Domain Messaging)
- What it is: shared‑memory + RPC pathway between Dom0 (daemon) and each zone. Messages are protobuf (often shown as JSON when printed).
- What flows: zone bootstrap (hostname, links, randomness seed), device attach/detach, workload spec handoff, resource updates, console streams.
- Randomness seeding: zone receives entropy from Dom0, which in turn gathers from the host’s crypto sources (FIPS‑compatible if present).
- Why it’s powerful for support: `protect host idm-snoop` shows every message across the channel; invaluable for reproductions, tracing the workload lifecycle, and even auditing (see the sketch below).
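A sketch of tracing a workload launch with the snooper; run it in one terminal and launch in another (the zone name and image are illustrative):

```sh
# Terminal 1: print every IDM message (protobuf, rendered as JSON).
protect host idm-snoop

# Terminal 2: launch a workload and watch the spec handoff,
# device attach/detach, and console traffic in the snoop output.
protect workload launch --zone demo docker.io/library/nginx:latest
```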
Note: values in the `/metrics` tree do not arrive over IDM; they’re queried from Xen separately.
Audience & expectations
This deep dive assumes the reader already knows about container isolation, Xen, and PCIe.
If you’re earlier in the journey, read the Architecture Overview first, then come back here.