FAQ
I want to try Edera—how do I get access?
Edera 1.0 is now generally available.
If you’re interested in access or becoming one of our design partners, please reach out via our contact form.
What are the system requirements for running Edera?
Edera is designed for flexibility and minimal host dependencies. Since we install a microkernel to manage containers in guest VMs, there are no userspace package requirements on the host.
Currently supported environments include:
- Kubernetes versions: n-3 support (Kubernetes 1.30–1.33)
- Azure: Azure Linux 2 and 3 (LTS)
- AWS: EKS with Amazon Linux 2023
- Linode: Kubernetes with n-3 support
- Linux kernel: 4.x and newer
- Userspace requirements: None—Edera runs in guest VMs with its own microkernel
Edera is built to work with modern Kubernetes environments and cloud-native infrastructure with minimal friction. Have a specific setup in mind? Reach out to us and we’ll help validate it.
How is Edera implemented?
Edera is built on a lightweight hypervisor derived from Xen, with most components rewritten in Rust to meet modern cloud-native needs.
At the core of Edera’s architecture is the concept of zones—lightweight virtual machines that each run their own Linux kernel and a minimal init service. These zones are made up of a kernel and system extensions, all delivered as standard OCI images.
Instead of relying on heavyweight virtualization, Edera uses Linux’s built-in paravirtualization (via the Xen PV protocol) to launch zones quickly and run them with near-native performance.
Edera does not require hardware virtualization, which is often unavailable in customer environments. Thanks to paravirtualization, we can still isolate workloads and control resources like memory and interrupts—tasks that normally require hardware support.
When hardware virtualization is available, Edera can use it to match bare-metal performance. Without it, there’s a minor performance tradeoff—but you still get stronger isolation and security than what’s possible with traditional container isolation using namespaces and cgroups.
How do I install Edera?
- For installing on EKS, see our install guide
- For installing on Linode, see our install guide
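Once Edera is installed, workloads opt in through a Kubernetes RuntimeClass. As a minimal sketch only, assuming the name and handler are both `edera` (your install guide has the actual values, and the installer may create this object for you):
```yaml
# Sketch: a RuntimeClass that routes pods to the Edera runtime.
# The name and handler below are assumptions; use your install's values.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: edera
handler: edera
```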
Do I need to use the Edera AMI?
No—but it does make things easier, especially for Kubernetes, since you don’t have to manually provision every node.
You don’t have to use the Edera AMI, but it’s the recommended path because it:
- Boots into Xen on first launch (no manual kernel or GRUB config).
- Comes pre-installed with Edera and all supporting services.
- Saves you from having to install Xen, modify GRUB, reboot manually, and wire up systemd services.
- Supports dynamic Kubernetes node scaling without manual effort on your part.
Technically, the kernel in the AMI is just vanilla upstream Linux with Xen support toggled on—nothing proprietary. You can bring your own kernel as long as it includes the necessary Xen modules. Contact us at support@edera.dev if you’d like to go down this route.
The AMI is a packaging convenience—not a hard dependency.
How does memory ballooning work in Edera?
For a technical deep dive, see our memory ballooning overview.
For usage and configuration details, check out our memory management guide.
Can I demo Edera?
Yes, you can sign up here to get hands-on with Edera. You can also watch a demo of Edera on our YouTube page.
Why doesn’t `docker pull` cache images with Edera?
Edera doesn’t use the Docker image cache—we manage images through our own system for performance and isolation reasons.
Can I run pods with `hostNetwork: true` on Edera?
Pods that use `hostNetwork: true` aren’t supported with the Edera `RuntimeClass`.
This is an intentional design decision. `hostNetwork: true` gives pods direct access to the host’s network stack, which breaks the isolation guarantees that Edera provides. These pods are typically used for low-level networking tasks—like CNI plugins or services that configure iptables—and need to run outside of Edera’s sandbox.
If your workload requires `hostNetwork: true`, make sure it does not use the Edera `RuntimeClass`.
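As an illustration, here is a hedged sketch of the two cases, assuming a RuntimeClass named `edera` (substitute the name from your install) and placeholder image names:
```yaml
# Sandboxed workload: runs in an Edera zone via the RuntimeClass.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  runtimeClassName: edera        # Edera isolation; leave hostNetwork unset
  containers:
    - name: app
      image: registry.example.com/my-app:latest
---
# Host-networking pod (e.g. a node-level agent): omit the Edera
# RuntimeClass, since hostNetwork: true needs the host's network stack.
apiVersion: v1
kind: Pod
metadata:
  name: node-agent
spec:
  hostNetwork: true              # runs outside Edera's sandbox
  containers:
    - name: agent
      image: registry.example.com/node-agent:latest
```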
Are startup times slower with Edera?
Cold start times can be slightly longer, but once the image has been cached (especially on larger nodes), startup is comparable to that of other container runtimes.
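If cold starts matter for a workload, one common Kubernetes pattern is to warm the cache ahead of time with a pre-pull DaemonSet. The sketch below is an assumption-laden starting point, not an Edera-documented feature: the image is a placeholder, the `edera` RuntimeClass name comes from your install, and the pull must go through Edera’s runtime for its cache to benefit.
```yaml
# Sketch: pre-pull an image on every node so later pods start warm.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepuller
spec:
  selector:
    matchLabels:
      app: image-prepuller
  template:
    metadata:
      labels:
        app: image-prepuller
    spec:
      runtimeClassName: edera    # assumed name; pull via Edera, not the host runtime
      initContainers:
        - name: pull
          image: registry.example.com/my-app:latest  # image to pre-cache
          command: ["true"]      # assumes the image ships a `true` binary; exits immediately
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9           # keeps the pod resident
```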
How does Edera work without nested virtualization?
Edera is built around a secure microkernel written in Rust. It introduces a concept called zones, which are lightweight virtual machines.
One of these zones—the root zone—runs your standard node operating system. This means you don’t need to replace or modify your OS. From the root zone, Edera can create and manage other zones that host application workloads. Each zone has its own Linux kernel and dedicated resources, isolated and managed by Edera Protect.
Is this like a stripped-down Linux with gVisor on top?
Not exactly.
gVisor intercepts and emulates syscalls to isolate workloads. Edera operates at a lower level: we manage a full virtualized environment for each zone. There’s no syscall interception—instead, the guest kernel runs in paravirtualized mode using features already built into Linux.
Device drivers function normally, and the hypervisor provides a virtual interface—enabling strong isolation and security without requiring hardware virtualization support.
Known issues
Pod startup time when the image is not cached
Cold start times for pods that don’t have images cached can take longer than expected.
Memory reporting quirks (ballooning & kubelet confusion)
Edera splits memory between:
- DOM0: The control domain, which runs Kubelet, Protect, and user processes. This is the host environment in Edera.
- DOMUs: The domains where your workloads run (Edera zones).
The issue:
Kubelet only sees memory allocated to DOM0.
For example, if DOM0 has 5 GB and DOMUs have 27 GB, Kubernetes still thinks the node has only 5 GB total, which can affect scheduling.
Workaround:
- Avoid setting `requests` in your pod specs.
- Set `limits` only.
This allows the scheduler to allocate resources more freely without being misled by incorrect memory totals.
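As a sketch of this workaround (the pod name and image are placeholders; `edera` is the assumed RuntimeClass name from your install):
```yaml
# Workaround sketch: declare limits only, with no requests, so the
# scheduler isn't misled by the DOM0-only memory total the kubelet reports.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  runtimeClassName: edera
  containers:
    - name: app
      image: registry.example.com/my-app:latest
      resources:
        limits:                  # limits only; requests are intentionally omitted
          memory: "2Gi"
          cpu: "1"
```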
Heads-up: On small nodes, DOM0 can be starved of memory, which may impact zone startup and Kubelet stability.