Kata Containers vs Edera



Both Kata Containers and Edera run containers inside virtual machines. They take different architectural approaches to get there—but the differences that matter most in practice are operational, not theoretical.

ℹ️ The key difference: Kata is an open-source project with many configuration options and moving parts. Edera is a purpose-built product focused on minimizing operational complexity. The architectures differ, but the operational experience is where the gap is widest.

Operational comparison

This is where Kata and Edera diverge most in practice. The differences show up across the full lifecycle.

Day 0: Planning and requirements

Kata requires hardware virtualization support (VT-x/AMD-V) on every node. In cloud environments, this means bare-metal instances or VMs with nested virtualization enabled. You also need to choose a VMM before you start: QEMU for device compatibility, Cloud Hypervisor for a smaller footprint, Firecracker for boot speed, or StratoVirt/Dragonball for other tradeoffs. Kata offers five VMMs and each one has different feature support for networking, storage, and device hotplug. The Kata hypervisor guide (Kata refers to its VMMs as “hypervisors”) and virtualization design doc cover these tradeoffs.
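In Kubernetes, the VMM choice surfaces as separate runtime classes. As a sketch (the handler names below follow kata-deploy's conventions, such as kata-qemu, kata-clh, and kata-fc; verify them against your Kata release):

```yaml
# Illustrative only: kata-deploy registers one RuntimeClass per VMM
# configuration it ships. "kata-qemu" is the conventional name for the
# QEMU-backed runtime; check your installed release for the exact set.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu   # must match the containerd runtime handler name
```

Choosing a different VMM later means switching workloads to a different runtime class, which is part of why the decision is worth making up front.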

Edera supports paravirtualization (PV), so it works on standard cloud instances without hardware virtualization extensions. When hardware virtualization is available, it uses PVH mode for better performance. There’s one runtime configuration to make, not a matrix of VMM/networking/storage decisions. The getting started guide shows the full setup.
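Opting a workload into the runtime is the same Kubernetes mechanism either way. A minimal sketch, assuming a runtime class named `edera` (the actual name depends on what your install registers):

```yaml
# Hypothetical pod spec: the runtimeClassName "edera" is an assumption;
# use whatever class name your Edera installation provides.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload
spec:
  runtimeClassName: edera   # route this pod to the VM-isolated runtime
  containers:
    - name: app
      image: nginx:1.27
```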

Day 1: Installation

Kata's primary installation path is a Helm chart that deploys kata-deploy, a privileged DaemonSet that places runtime artifacts directly on the host. This can conflict with compliance policies in managed clusters that restrict privileged containers, requiring coordination with the cluster management layer to allow it.

Edera installs as part of the node image at build time rather than deploying artifacts at runtime. This avoids the privileged-container-at-runtime pattern and means the runtime is present before any workloads start. See the install guides for platform-specific walkthroughs.

Day 2: Upgrades and debugging

This is where the operational gap is widest.

Kata has a compatibility chain: the host kernel, the VMM (for example, QEMU), the Kata runtime shim (containerd-shim-kata-v2), the guest kernel image, and the kata-agent inside each VM must all be compatible. Host kernel updates can break VMM compatibility—there are documented cases of QEMU versions becoming incompatible after kernel upgrades, particularly for features like confidential computing. This means Kata upgrades sometimes need to happen in lockstep with host kernel updates, and compliance requirements that push the host kernel forward can outpace Kata’s update cadence. Logs are spread across the shim (on the host), the kata-agent (inside the guest, accessible via kata-runtime exec), and the VMM—each independently developed.

Edera also has multiple services on the node (CRI, daemon, networking, storage, orchestrator), so it’s not a single binary. Logs come from multiple systemd services. But the key difference is that these are all developed and released together as one product, and the runtime is baked into the node image. Upgrades are a node image rollout rather than a runtime artifact dance. Tools like edera-check can validate system readiness before deployment rather than discovering issues during rollout.

Architecture comparison

Kata’s stack

Kata uses KVM with a Virtual Machine Monitor (VMM) to manage each guest:

┌─────────────────────────────────────┐
│          kubelet                    │
├─────────────────────────────────────┤
│      containerd / CRI-O             │
├─────────────────────────────────────┤
│  containerd-shim-kata-v2 (per pod)  │
├─────────────────────────────────────┤
│      VMM (QEMU / Cloud Hypervisor   │  ← VMM layer
│      / Firecracker / StratoVirt)    │
├─────────────────────────────────────┤
│      KVM                            │
├─────────────────────────────────────┤
│      Guest VM + kata-agent          │
└─────────────────────────────────────┘

The containerd-shim-kata-v2 manages the pod lifecycle and communicates with kata-agent inside the guest via ttrpc. The VMM handles device emulation, memory management, and VM lifecycle. Each pod gets its own shim and VMM process.

Edera’s stack

Edera uses a Xen-derived hypervisor and manages VMs directly:

┌─────────────────────────────────────┐
│          kubelet                    │
├─────────────────────────────────────┤
│       Edera CRI                     │
├─────────────────────────────────────┤
│       Edera Daemon                  │
│   - Zone orchestration              │
│   - Network (virtio backends)       │
│   - Storage (block backends)        │
├─────────────────────────────────────┤
│      Xen-derived Hypervisor         │
├─────────────────────────────────────┤
│           Zone                      │
│   - Dedicated Linux kernel          │
│   - Container workload              │
└─────────────────────────────────────┘

In Edera, each pod runs in its own zone by default—a lightweight VM with a dedicated Linux kernel. Edera’s services (CRI, daemon, networking, storage, orchestrator) handle what Kata splits across containerd-shim-kata-v2, kata-agent, and a separate VMM. The difference is these are all part of one product, developed and released together.

Deployment requirements

| Requirement | Kata | Edera |
| --- | --- | --- |
| Hardware virtualization | Required (KVM needs VT-x/AMD-V) | Optional; PV mode works without it |
| Nested virtualization in cloud | Required for standard deployment; peer-pods can avoid it | Not required |
| Typical cloud deployment | Bare metal, nested-virt-enabled VMs, or peer-pods | Standard instances |
| Installation method | Runtime artifact deployment (privileged DaemonSet) | Build-time node image integration |

Edera automatically selects PV or PVH mode based on hardware availability.

Observability

Both give each workload its own kernel. Accessing that kernel’s state differs in approach and depth.

Kata provides host-side metrics through kata-monitor, which aggregates data from each pod’s runtime shim and exposes it as Prometheus endpoints. The shim collects guest metrics from kata-agent via ttrpc. This covers VM-level metrics (CPU, memory, network, storage) and kata-specific metrics (hypervisor stats, agent stats). However, deep kernel observability—PSI metrics, page fault rates, kernel tracing—requires additional instrumentation inside each guest. eBPF-based security tools need to be deployed per-VM since they run inside the guest kernel.
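Scraping kata-monitor looks like any other Prometheus target. A sketch, assuming kata-monitor's default listen port of 8090 (verify the `--listen-address` flag in your deployment):

```yaml
# Hypothetical Prometheus scrape job for kata-monitor. One kata-monitor
# runs per node and aggregates metrics from that node's per-pod shims;
# the port 8090 is an assumption based on the default listen address.
scrape_configs:
  - job_name: kata-monitor
    static_configs:
      - targets:
          - "node1:8090"
          - "node2:8090"
```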

Edera exposes per-zone metrics (CPU, memory usage) and hypervisor-level metrics as Prometheus endpoints without requiring agents inside each zone. Zones also have PSI enabled by default via /proc/pressure/* for in-zone pressure monitoring. Because the hypervisor sits below the zones, Edera can enable Falco for eBPF-based syscall monitoring across all zones from a single host-level instance, rather than deploying Falco inside each VM.
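The host-level Falco deployment is ordinary Falco configuration rather than anything Edera-specific. A minimal sketch using the upstream Falco Helm chart's values (the `driver.kind` key is from that chart; Edera's docs may ship their own values file):

```yaml
# Hypothetical Helm values for the upstream Falco chart: a single
# host-level instance using the modern eBPF driver, rather than one
# Falco deployment inside each guest VM.
driver:
  kind: modern_ebpf
```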

Resource overhead

Kata: Each pod runs a VMM process alongside the guest VM. QEMU is the heaviest option; Cloud Hypervisor and Firecracker have smaller footprints but fewer device options.

Edera: No per-pod VMM process. Zones add approximately 64 MiB of overhead each (kernel, hypervisor, initramfs), so a node running 50 zones carries roughly 3.2 GiB of isolation overhead.

Virtualization modes

Edera supports multiple virtualization modes:

| Mode | Hardware requirements | Use case |
| --- | --- | --- |
| PV (paravirtualization) | None | Cloud VMs without nested virt |
| PVH | VT-x/AMD-V | Better performance when available |

Kata’s VMMs all use KVM, which requires hardware virtualization extensions. The peer-pods project provides a cloud API workaround but doesn’t change the underlying KVM requirement—it moves VM creation to the cloud provider.

Summary

Both Kata and Edera provide VM-level container isolation. The choice comes down to:

  • Operational complexity: Kata requires choosing and maintaining a VMM, managing runtime artifact compatibility with the host kernel, and coordinating multi-component upgrades. Edera bakes the runtime into the node image and ships as one product.
  • Deployment constraints: Kata needs hardware virtualization (bare metal, nested virt, or peer-pods). Edera’s PV mode works on standard cloud instances.
  • Observability: Both expose Prometheus metrics. Edera enables centralized Falco integration from the hypervisor layer. Kata provides VM-level metrics via kata-monitor but requires per-VM deployment for eBPF tooling.

Neither approach is inherently “better”—they solve the same problem with different tradeoffs. Kata offers more flexibility as an open-source project. Edera optimizes for getting isolation running with minimal operational burden.
