Edera architecture overview

TL;DR

Edera provides hardened container isolation by running each workload inside its own lightweight virtual machine called a zone.
Zones are powered by a hypervisor based on Xen (with most components rebuilt in Rust) and packaged as OCI images for fast boot and flexible composition.

This architecture gives every workload its own Linux kernel, eliminates shared-kernel risks, and provides strong isolation without sacrificing cloud-native performance.

Diagram of Edera zone-based architecture

ℹ️
Looking for the full technical deep dive?
👉 Download the Whitepaper.

Big picture

Edera’s architecture is built from several layers that work together:

  • Hypervisor layer: A microkernel hypervisor (Xen) with control components rebuilt in Rust. It enforces memory, CPU, and device boundaries.
  • Zone layer: Lightweight VMs (zones) that run their own Linux kernel and init, plus OCI-packaged system extensions.
  • Workloads: Applications or containers that execute inside zones, each with its own kernel boundary.
  • Driver zones: Special zones that isolate drivers (GPU, NIC, storage) from the host. They expose devices as services to other zones.
  • Orchestration & tooling: Kubernetes RuntimeClass integration and the protect CLI let you launch, manage, and monitor zones.
  • Control plane services: Identity, logging, and metrics services connect Edera into your wider environment (for example: SIEM, IAM).

Hypervisor layer

  • Based on Xen, but with large portions of the control stack rewritten in Rust for safety and maintainability.
  • Mediates CPU scheduling, memory allocation, and interrupts.
  • Provides fast boot and high performance through paravirtualization, and can take advantage of hardware-virtualization extensions when they are available.

This design avoids the need for nested virtualization, making Edera compatible with nearly all cloud instance types.

Zones

  • Each workload runs in a zone (a lightweight VM).
  • A zone is composed of:
    • A Linux kernel (with PV support enabled).
    • A minimal init process.
    • One or more system extensions, distributed as OCI images.
  • Boots in milliseconds and has near bare-metal performance.

Isolation guarantees:

  • Dedicated kernel per workload → no risk of cross-container kernel exploits.
  • Namespaces and cgroups still work inside each zone, but hard isolation is enforced at the VM boundary rather than by a shared host kernel.

Workloads

  • A workload is the unit of execution inside a zone.
  • Workloads are typically OCI containers, launched with the protect workload command or via Kubernetes scheduling.
  • Because each workload runs inside its own lightweight VM, it gets a dedicated kernel boundary.

Workload lifecycle:

  1. The control plane (protect CLI or Kubernetes) requests a workload start.
  2. A new zone is provisioned with its kernel + extensions.
  3. The container image is pulled, mounted, and executed as the workload inside that zone.
  4. Workload logs and metrics flow back through the host into your existing monitoring/observability systems.

Benefits:

  • Crash containment → if a workload or its kernel crashes, only that zone is affected.
  • Flexible isolation → workloads can attach to driver zones (GPU, networking, storage) if needed, or stay fully self-contained.

Driver zones

  • Device drivers (for example: NVIDIA GPU stack, NICs, storage) can run in their own driver zone instead of the host.
  • Other zones attach to these driver zones to access hardware.

Benefits:

  • Compromise of a driver does not compromise the host.
  • Easier to update drivers as OCI images instead of patching the base OS.
  • Supports multi-tenant environments where some workloads share access to hardware and others remain isolated.
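The attach pattern can be sketched as a toy model: workload zones never touch hardware directly, they call a service exposed by the driver zone. The class and method names below are assumptions for illustration, not Edera's API.

```python
# Toy model of driver zones: hardware access is brokered through a
# dedicated zone instead of the host. Names are illustrative only.

class DriverZone:
    """Isolates a device driver and exposes the device as a service."""
    def __init__(self, device: str):
        self.device = device

    def handle(self, request: str) -> str:
        # A real driver zone would talk to the hardware here; a compromise
        # of this code stays confined to this zone.
        return f"{self.device}: {request} ok"

class WorkloadZone:
    """A workload zone attaches to a driver zone, not to raw hardware."""
    def __init__(self, driver: DriverZone):
        self.driver = driver

    def use_device(self, request: str) -> str:
        return self.driver.handle(request)

gpu_zone = DriverZone("gpu0")
# Two tenant zones share the same GPU through one driver zone.
a, b = WorkloadZone(gpu_zone), WorkloadZone(gpu_zone)
print(a.use_device("alloc"))  # gpu0: alloc ok
print(b.use_device("alloc"))  # gpu0: alloc ok
```

The design choice this illustrates: the driver zone is the only component holding device privileges, so it can be updated (as an OCI image) or restarted without touching the host or the tenant zones.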

Orchestration & tooling

  • Kubernetes: Edera integrates via RuntimeClass. Each pod gets scheduled into its own zone automatically.
  • Non-Kubernetes: Use the protect CLI to launch zones and workloads directly:
    • protect zone launch: Start a new zone.
    • protect workload launch: Run a container inside a zone.
    • protect workload exec/attach: Interact with a workload.
  • Automation: Logs, metrics, and events flow back through the host network stack to your monitoring systems.
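On the Kubernetes side, RuntimeClass wiring generally looks like the sketch below. This is a hedged example: the handler name edera and the image reference are assumptions for illustration, not documented values.

```yaml
# RuntimeClass sketch -- the handler name "edera" is assumed, not documented.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: edera
handler: edera
---
# Any pod that opts in via runtimeClassName gets its own zone.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: edera
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
```

Pods without the runtimeClassName field keep using the cluster's default runtime, so Edera can be adopted per workload.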

Control plane services

  • Observability: Each zone emits telemetry through the host to your logging, metrics, and SIEM systems.
  • Policy enforcement: Zones can be restricted by policy (for example: CPU/memory quotas, allowed extensions).

Data & control flow

  1. The control plane (protect CLI or Kubernetes) requests a new workload.
  2. The hypervisor launches a fresh zone, assigning memory and vCPUs.
  3. The zone boots a Linux kernel and mounts its extensions (OCI images).
  4. If needed, the zone attaches to a driver zone for device access.
  5. Workload telemetry flows back to the control plane (metrics/logs).

Deployment models

  • Kubernetes clusters → Per-pod isolation via RuntimeClass.
  • Bare metal / VMs → Launch and manage zones manually with protect.
  • Hybrid → Use Edera for select workloads (for example: CI/CD, GPU jobs, high-trust services) alongside traditional containers.
