How Edera’s components fit together
Edera’s hardened runtime layers a Xen-based hypervisor with a host kernel (Dom0), zone kernels (domU), and a set of focused user-space daemons (daemon, storage, network, etc.). This guide walks through each component and how they interact on real systems.
Component map (at a glance)
- Hypervisor: Xen (PV/PVH), oxenstore (OCaml, memory-safe), event channels, grant tables.
- Host kernel (Dom0): Aims to look like a normal distro kernel (with Xen + Dom0 enablement).
- Zone kernel (domU): Guest kernel that runs your workloads inside zones.
- User-space daemons: daemon (brain/RPC/IDM host), storage, network, orchestrator, cri.
- Utilities shipped on the host file system: statically compiled `mkfs.ext4`, `mksquashfs`.
- Protect CLI: manages hosts, zones, workloads (`protect zone`, `protect workload`, etc.); exposes logs, metrics, and introspection commands.
Figure: The full Edera startup sequence.
Zones vs. Workloads
Zone: A lightweight VM (PVH/PV guest) with its own kernel, launched and managed by Edera. Zones are the unit of isolation: they hold resources like CPU, memory, and network devices. Think of a zone as the “execution sandbox.”

Workload: An OCI image that runs inside a zone. When you launch a workload (`protect workload launch --zone <ZONE> <OCI>`), the daemon pulls the image, hands it to the zone over IDM, and Styrolite starts it as a process with the requested resources.
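To make the zone/workload distinction concrete, here is a minimal sketch of the relationship. The class names and fields are illustrative only, not Edera’s actual types:

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    """A lightweight VM with its own kernel: the unit of isolation."""
    name: str
    vcpus: int
    memory_mb: int
    workloads: list = field(default_factory=list)

@dataclass
class Workload:
    """An OCI image that runs as a process inside a zone."""
    image: str
    command: list

def launch_workload(zone: Zone, workload: Workload) -> None:
    # In the real system, the daemon hands the spec to the zone over IDM;
    # here we only record the association to show the ownership model.
    zone.workloads.append(workload)

zone = Zone(name="demo", vcpus=2, memory_mb=512)
launch_workload(zone, Workload(image="docker.io/library/nginx:latest", command=["nginx"]))
print(len(zone.workloads))  # the zone now holds one workload
```

The key point the sketch encodes: zones own resources (CPU, memory, devices); workloads are just processes that live inside one.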
Hypervisor & kernels
Xen-OCI (hypervisor)
- Isolation for zones (domU) using PV or PVH.
- oxenstore (OCaml KV store) mediates state/config between components via the hypervisor.
- Debug surface:
  - `protect host hv-console` → Xen boot/warning logs (think: “`dmesg` for the hypervisor”).
  - `protect host hv-debug-info | jq` → dumps the state of all devices (used by Edera support for troubleshooting).
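Because the debug dump is JSON, you can slice it the same way you would with `jq`. The sketch below assumes a purely hypothetical shape for the device dump (the real output of `protect host hv-debug-info` may look quite different):

```python
import json

# Hypothetical device-state dump; the real schema is not documented here.
dump = json.loads("""
{
  "devices": [
    {"id": "vif-1", "zone": "web", "state": "connected"},
    {"id": "vbd-2", "zone": "web", "state": "error"},
    {"id": "vif-3", "zone": "db",  "state": "connected"}
  ]
}
""")

# Python equivalent of a jq filter selecting unhealthy devices:
broken = [d["id"] for d in dump["devices"] if d["state"] != "connected"]
print(broken)  # ['vbd-2']
```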
Host kernel (Dom0)
- Our goal is for the host kernel to be swappable and familiar.
- Common failure modes: early panics, missing config options.
- Swapability: The installer copies the host-kernel filesystem image; you can re-run it to switch LTS kernels quickly.
- Where to look first: `dmesg` / kernel logs for Dom0; if you suspect the hypervisor, check `protect host hv-console`.
Figure: Dom0 startup sequence.
Zone kernel (domU)
- Purpose-built guest kernel that boots each Edera zone and runs the workloads. You’ll see the zone’s kernel console via `protect zone logs <ZONE>`.
Figure: The Zone startup sequence.
User-space daemons
Daemon (the brain)
- Main RPC server and glue between hypervisor and user space.
- Translates zone/workload specs → Xen domain configs; hosts the IDM channel for zone↔host coordination.
- On workload launch: daemon hands the spec to the zone via IDM; inside the zone, Styrolite realizes it.
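The spec-to-domain-config translation can be pictured as a pure function. This is a hedged sketch only: the field names, the config shape, and the kernel path are invented for illustration and are not Edera’s real schema.

```python
# Illustrative sketch of "zone spec -> Xen domain config"; all names
# and paths here are hypothetical, not Edera's actual format.

def zone_spec_to_domain_config(spec: dict) -> dict:
    """Translate a user-facing zone spec into a PVH domain configuration."""
    return {
        "name": spec["name"],
        "type": "pvh",                            # PV or PVH guest
        "vcpus": spec.get("cpus", 1),
        "memory": spec.get("memory_mb", 256),
        "kernel": "/var/lib/edera/zone-kernel",   # illustrative path
    }

cfg = zone_spec_to_domain_config({"name": "web", "cpus": 2, "memory_mb": 512})
print(cfg["type"], cfg["vcpus"])  # pvh 2
```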
Styrolite (inside the zone)
- What it is: the lightweight container runtime inside every zone.
- Role: receives workload specs from the daemon over IDM and turns them into running processes.
- Responsibilities:
- Parses the workload spec (OCI image + command/args/env).
- Sets up Linux primitives (cgroups, namespaces, mounts, networking).
- Launches the OCI container process as PID 1 in the workload’s namespace within the zone.
The daemon translates configs into IDM messages; Styrolite interprets them and actually starts the workload.
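The handoff above can be sketched end to end. In this toy model, JSON stands in for the protobuf messages that actually cross IDM, and every function and field name is illustrative rather than Edera’s real interface:

```python
import json

def daemon_send(spec: dict) -> bytes:
    """Dom0 side: serialize the workload spec for the IDM channel."""
    return json.dumps({"kind": "workload_launch", "spec": spec}).encode()

def styrolite_receive(frame: bytes) -> str:
    """Zone side: decode the message and 'start' the workload."""
    msg = json.loads(frame)
    assert msg["kind"] == "workload_launch"
    spec = msg["spec"]
    # Real Styrolite would set up cgroups/namespaces/mounts, then exec
    # the container process as PID 1 of the workload's namespace.
    return f"started {spec['image']} with args {spec['args']}"

frame = daemon_send({"image": "docker.io/library/redis:7", "args": ["redis-server"]})
print(styrolite_receive(frame))
```

The division of labor is the point: the daemon never touches Linux namespaces itself; it only speaks the message protocol, and Styrolite realizes the spec inside the zone.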
Storage
- Manages volume mounts for zones/workloads.
- Today relies on Dom0 kernel modules; VirtIO support will move into the storage daemon in the future.
Network
- Configures zone network devices/links and routes.
- Default: sets up connectivity so zones can talk to each other.
- SR-IOV / passthrough: if used, `protect network` does nothing; you manage wiring directly.
- Kubernetes: networking comes via CNI (currently through CRI, planned to move into the network daemon).
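The default-vs-passthrough behavior boils down to a branch. The sketch below models it with invented step names and an invented bridge name, purely to show the decision, not actual Edera commands:

```python
# Hypothetical sketch of the network daemon's decision: wire up links and
# routes by default, or step aside entirely when SR-IOV/passthrough is used.

def plan_network(zone: str, sriov: bool) -> list:
    if sriov:
        return []  # passthrough: you manage wiring directly
    return [
        f"create-link vif-{zone}",
        f"attach vif-{zone} to bridge edera0",  # illustrative bridge name
        f"add-route {zone} via edera0",
    ]

print(plan_network("web", sriov=False))  # three setup steps
print(plan_network("gpu", sriov=True))   # []
```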
Orchestrator and CRI
- Provide orchestration and CRI glue, enabling Kubernetes and external integrations.
IDM (Inter-Domain Messaging)
- What it is: shared-memory + RPC pathway between Dom0 (daemon) and each zone. Messages are protobuf (often shown as JSON when printed).
- What flows: zone bootstrap (hostname, links, randomness seed), device attach/detach, workload spec handoff, resource updates, console streams.
- Randomness seeding: zone receives entropy from Dom0, which in turn gathers from the host’s crypto sources (FIPS-compatible if present).
- Why it’s powerful for support: `protect host idm-snoop` shows every message across the channel; invaluable for reproductions, tracing the workload lifecycle, and even auditing.
- Note: metrics under the `/metrics` tree do not arrive over IDM; they’re queried from Xen separately.

Audience & expectations
This deep dive assumes the reader already knows about container isolation, Xen, and PCIe.
If you’re earlier in the journey, read the Architecture Overview first, then come back here.