CPU management in Edera
Overview
Edera allocates and schedules CPU resources for zones using the Xen type-1 hypervisor and paravirtualized (PV) virtual machines. Unlike traditional container runtimes that share a Linux kernel, Edera enforces CPU allocation and scheduling at the hypervisor level—ensuring strong workload isolation, predictable performance, and secure multi-tenancy.
Terminology
In this guide, we use the following terms:
- Zone: The Edera workload abstraction. Each Edera zone represents an isolated execution environment for customer workloads.
- Paravirtualized virtual machine (PV guest): A virtual machine running in Xen’s paravirtualized (PV) mode. In Edera, each zone runs inside its own PV guest.
- Guest OS / Guest kernel: The operating system running inside an Edera zone.
- domU: Xen’s term for an unprivileged guest domain—what Edera zones run as internally.
- dom0: Xen’s privileged management domain that provides hypervisor control functions and backend drivers.
Throughout this guide, we primarily use the term zone to describe the execution environment from the Edera perspective.
CPU management
Kubernetes approach
Kubernetes manages CPU resources through the Linux CFS (Completely Fair Scheduler) and cgroups. The CFS provides fair scheduling of processes across the system, while cgroups enforce resource limits and provide isolation between containers. This shared kernel architecture enables efficient resource utilization and is well-suited for most containerized workloads.
CPU requests inform the scheduler's placement decisions and guarantee a proportional minimum share of CPU under contention, while CPU limits cap a container's consumption through cgroup enforcement. The CFS then distributes the available CPU time fairly among competing processes.
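For comparison, this is how a Kubernetes Pod expresses those CPU requests and limits. A minimal sketch; the pod name, container name, and image are placeholders:

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"   # placement hint + proportional share under contention
      limits:
        cpu: "1"      # hard cap enforced via the CFS quota mechanism
EOF
```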
Edera’s approach
Edera uses a different architecture where each zone runs inside its own PV guest (paravirtualized virtual machine) with dedicated vCPU allocation. This approach leverages a type-1 hypervisor (Xen) to manage CPU and memory resources directly at the hardware level, providing stronger isolation compared to container-based solutions.
With hypervisor-level resource management, each zone has its own kernel and virtualized compute resources. The hypervisor's scheduler handles CPU allocation without the shared-kernel dynamics present in container environments. Xen acts as a central resource manager for all virtual machines, much as the Linux kernel manages processes, but it enforces CPU allocation at the virtualization layer, outside the guest OS. This eliminates cgroup-based resource contention and provides more predictable performance isolation.
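As an illustration, on a Xen host this allocation can be inspected from dom0 with the standard xl toolstack. A sketch only; Edera's own tooling may wrap or restrict these commands:

```
# Show running domains and the number of vCPUs assigned to each:
xl list

# Show each vCPU's state, the physical CPU it last ran on, and its CPU affinity:
xl vcpu-list
```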
How it works
CPU scheduling in Edera is handled by three key components:
Xen scheduler
Edera uses the Xen type-1 hypervisor to control vCPU execution directly on physical CPUs. Xen supports several scheduling policies, including:
- Credit Scheduler (default): A weighted fair-share scheduler. Each Edera zone is assigned a weight that determines its proportional share of CPU time relative to other zones: on a contended system, a zone with a weight of 512 receives twice as much CPU as a zone with a weight of 256. The scheduler uses a 30 ms time slice for scheduling decisions, automatically balancing CPU-intensive and I/O-heavy workloads. Most workloads run well with the default settings, but advanced settings are available for specialized workloads.
- Credit2 Scheduler: Optimized for latency-sensitive and high-density environments.
- RTDS Scheduler: Real-time, latency-sensitive scheduling.
- ARINC653 Scheduler: Static partitioning for deterministic workloads.
Edera currently uses the Credit Scheduler, which offers a balanced trade-off between fairness and performance for most workloads.
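As a sketch of how credit-scheduler weights are tuned with the standard xl toolstack (the domain name and values below are illustrative, and Edera may manage weights on your behalf):

```
# Show current credit-scheduler parameters (weight, cap) for all domains:
xl sched-credit

# Give one domain twice the default weight of 256, so it receives roughly
# 2x the CPU of a weight-256 domain when the system is contended:
xl sched-credit -d zone-example -w 512
```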
Paravirtualized guests (PV)
Edera zones run as paravirtualized guests (PV guests), interacting with the hypervisor via hypercalls instead of executing privileged CPU instructions. This design:
- Eliminates the need for device emulation or QEMU.
- Uses lightweight PV drivers for CPU signaling and I/O.
- Works in both bare-metal and cloud environments without requiring hardware virtualization extensions (Intel VT-x or AMD-V).
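For reference, PV mode is selected in the Xen domain configuration. The sketch below is hypothetical: Edera generates zone configurations internally, so the name, kernel path, and sizing are illustrative only:

```
# Write a minimal PV domain config (illustrative values):
cat > zone-example.cfg <<'EOF'
name   = "zone-example"
type   = "pv"                      # paravirtualized: no VT-x/AMD-V, no QEMU device model
kernel = "/path/to/guest/vmlinuz"  # guest kernel booted directly by Xen
vcpus  = 2
memory = 1024                      # MiB
EOF

# Boot it with the standard xl toolstack:
xl create zone-example.cfg
```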
vCPU assignment and topology awareness
Xen provides several features for precise CPU resource control:
- NUMA-aware vCPU pinning: vCPUs can be pinned to specific physical CPUs to optimize performance and memory locality. This allows workloads to benefit from faster memory access on NUMA systems, reducing cross-node latency.
- CPU pools (cpupools): Xen supports cpupools, which partition CPUs into separate scheduling groups. Each pool can run a different scheduler, enabling isolation between workload types—for example, separating latency-sensitive tasks from general workloads.
Edera leverages these Xen capabilities to optimize vCPU placement based on workload needs, improving performance, isolation, and security.
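The sketch below shows what these features look like with the standard xl toolstack; the CPU numbers, pool name, and domain name are illustrative, and Edera applies equivalent placement automatically:

```
# Inspect host NUMA topology to choose CPUs with good memory locality:
xl info -n

# Pin all vCPUs of a domain to physical CPUs 0-3 (e.g., one NUMA node):
xl vcpu-pin zone-example all 0-3

# Create a separate cpupool running the Credit2 scheduler (its CPUs must
# first be freed from the default pool, e.g. with xl cpupool-cpu-remove),
# then move a domain into it:
xl cpupool-create name=\"latency\" sched=\"credit2\"
xl cpupool-migrate zone-example latency
```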
Secure by design
Under the hood, Xen relies on a privileged management domain, called dom0, to run hypervisor control tools and provide backend services such as disk and networking drivers. Edera zones run as unprivileged guest domains, known as domU, which execute customer workloads.
Because CPU scheduling in Edera happens at the hypervisor layer, vCPU allocations are strictly enforced by Xen—not by the guest OS inside the zone. Each Edera zone operates within its assigned vCPU limits, regardless of what happens inside the guest OS. This design ensures that a compromised guest cannot affect CPU access for other zones.
The Xen hypervisor schedules vCPUs for each zone independently of dom0, ensuring strong isolation between zones. Within each zone, the guest kernel manages thread-level scheduling on its assigned vCPUs—deciding how processes inside the zone utilize the available CPU resources.
This separation ensures that inter-zone CPU isolation is strictly enforced by the hypervisor while allowing each guest to manage its internal workloads independently, combining strong security boundaries with operational flexibility.
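As one concrete illustration of hypervisor-side enforcement, the credit scheduler also supports per-domain caps: a capped domain cannot exceed its CPU budget no matter how many threads run inside the guest. A hypothetical example with the standard xl toolstack:

```
# Cap a domain at 50% of one physical CPU (the cap is a percentage;
# 100 = one full CPU, 0 = no cap). The guest OS cannot override this.
xl sched-credit -d zone-example -c 50
```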