Zone security model
TL;DR
Edera isolates workloads inside zones—each zone runs its own Linux kernel inside a hardware-enforced boundary. Edera’s security promise is that code running inside a zone cannot escape the zone to reach the host node or other zones. The configuration that creates the zone—whether a Kubernetes pod spec or a protect CLI command—is outside Edera’s security boundary. Operators are responsible for ensuring that zone configurations do not undermine isolation.
This document defines Edera’s shared responsibility model: what Edera protects, what operators are responsible for, and how the two work together. Edera can be deployed with Kubernetes or as a standalone runtime, and this model applies to both.
Trust boundaries
Edera’s security model involves three roles:
| Role | Trust level | Description |
|---|---|---|
| Platform operator | Trusted | Installs Edera, configures the environment (Kubernetes cluster or standalone host), manages access control, and gates which zone configurations are applied |
| Workload author | Varies | Writes application code, container images, and deployment configurations. May influence zone configuration through Helm values, CI/CD templates, or CLI scripts. Gated by the operator’s policies |
| Workload | Untrusted | Code executing inside a zone. This is what Edera isolates |
The zone is the security boundary. Everything inside the zone—containers, application code, and the zone’s own Linux kernel—is treated as untrusted. Everything outside the zone—the host node, the Edera runtime, and the hypervisor—is trusted infrastructure.
```
┌─────────────────── Node ────────────────────┐
│                                             │
│  ┌──── Zone ──────────────────┐             │
│  │                            │             │
│  │  ┌── Workload ──────────┐  │             │
│  │  │  Container(s)        │  │             │
│  │  │  [UNTRUSTED]         │  │             │
│  │  └──────────────────────┘  │             │
│  │                            │             │
│  │  Zone kernel               │             │
│  │  [ISOLATION BOUNDARY]      │             │
│  └────────────────────────────┘             │
│                                             │
│  Edera runtime, hypervisor [TRUSTED]        │
└─────────────────────────────────────────────┘
```

The zone configuration—whether a Kubernetes pod spec or a protect CLI invocation—sits at the boundary between the platform operator and the workload author. The workload author may contribute to the configuration, but the operator is responsible for gating what gets applied.
Edera isolates the workload, not the configuration that created it.
What Edera protects
Edera provides hard multi-tenancy at the zone level. Each zone is a hardware-isolated environment with its own Linux kernel, enforced by the hypervisor. Edera defends against:
Zone-to-host escape. A workload running inside a zone cannot break out to reach the host node. The zone boundary is enforced by the hypervisor, not by Linux namespaces or cgroups. Container escape techniques that rely on shared kernel resources—such as /proc/1/root traversal, nsenter, and cgroup notify_on_release—do not work because the zone runs its own kernel. There is no shared kernel to exploit.
Zone-to-zone lateral movement. A workload inside one zone cannot reach another zone’s processes, filesystem, or memory. Each zone is a separate hardware-isolated environment. There is no shared kernel or shared namespace between zones.
Zone kernel exploit leading to node compromise. Even if a workload exploits a vulnerability in the zone’s Linux kernel, the hypervisor boundary prevents that exploit from reaching the host node. The zone kernel and the host kernel are completely separate.
Hypervisor escape from a zone. Edera’s hypervisor is based on Xen, with critical control components rebuilt in Rust. A workload attempting to escape the zone must breach the hypervisor boundary—a significantly harder attack than escaping a Linux namespace.
What Edera does not protect
The following areas are outside Edera’s security boundary. Responsibility falls to the platform operator, cloud provider, or other components in the stack.
| Area | Responsible party | Details |
|---|---|---|
| Zone configuration | Platform operator | Edera trusts the zone configuration it receives. If a configuration grants a zone access to the host filesystem, Edera honors that request. See Zone configuration and the trust boundary |
| Cloud metadata (IMDS) | Cloud provider / platform operator | Cloud instance metadata endpoints are reachable from zones unless the operator or cloud provider restricts access (for example, IMDSv2 hop limits, pod identity) |
| Network traffic filtering | Platform operator / CNI provider | Edera does not filter network traffic between zones or between zones and external services. See the deployment-specific sections below |
| Image supply chain | Platform operator / registry | Edera pulls and runs the container images specified in the zone configuration. Image signing, vulnerability scanning, and registry security are the operator’s responsibility |
| Intra-zone isolation | Standard Linux | Edera makes no promises about container-to-container isolation within a single zone. Standard Linux security mechanisms (seccomp, AppArmor, capabilities) work inside a zone because it runs a real Linux kernel, but the zone boundary—not the container boundary—is Edera’s security claim |
With Kubernetes
When running in Kubernetes, additional responsibilities apply:
| Area | Responsible party | Details |
|---|---|---|
| Pod specification | Platform operator | The pod spec is the zone configuration. Dangerous fields (such as hostPath volumes) are honored because the operator or their admission policies allowed them |
| Kubernetes RBAC | Platform operator | Cluster permissions, service account tokens, and API server access are managed by Kubernetes, not Edera |
| Network policy | CNI provider | Traffic between zones and other pods or services is governed by the CNI plugin and Kubernetes NetworkPolicy |
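Because traffic filtering falls to the CNI plugin rather than Edera, operators commonly start from a default-deny posture and allow specific flows with narrower policies. A minimal sketch using a standard Kubernetes NetworkPolicy (the namespace name is illustrative, and enforcement requires a CNI that implements NetworkPolicy):

```yaml
# Deny all ingress and egress for every pod (zone) in the tenant-a
# namespace; add narrower policies to allow required traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-a
spec:
  podSelector: {}     # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```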
Without Kubernetes
When running standalone, additional responsibilities apply:
| Area | Responsible party | Details |
|---|---|---|
| CLI and daemon access | Platform operator | The protect CLI and daemon socket (daemon.socket) are the configuration surface. Anyone with access to the CLI or socket can create zones and launch workloads with arbitrary parameters, including host filesystem mounts |
| Network filtering | Platform operator | Edera configures basic forwarding rules via nftables, but traffic filtering between zones and external networks is the operator’s responsibility. See zone networking for details |
| Daemon configuration | Platform operator | The daemon.toml file controls global settings including network subnets and OCI registry mirrors. Protecting this file from unauthorized modification is the operator’s responsibility |
Zone configuration and the trust boundary
Edera executes the zone configuration it receives. If that configuration grants a zone access to host resources, the zone will have that access. The operator is responsible for ensuring configurations do not grant access that undermines zone isolation.
This applies regardless of deployment mode:
- With Kubernetes, the zone configuration is the pod spec, delivered to Edera via the CRI. Edera operates as a CRI-compliant container runtime and honors the Kubernetes runtime contract. This is by design, not by omission—Edera respects the same contract that all Kubernetes runtimes follow.
- Without Kubernetes, the zone configuration is the set of arguments passed to `protect zone launch` and `protect workload launch`. Volume mounts (`-m`), kernel options, and other parameters are honored as specified.
In both cases, if the configuration mounts the host filesystem into the zone, the zone will have access to the host filesystem through that mount.
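To make the standalone case concrete, the sketch below shows a host mount being granted exactly as requested. Only the command names and the `-m` flag come from this document; the mount argument format, zone name, and image are assumptions, so verify the exact syntax with `protect workload launch --help`:

```shell
# Hypothetical invocation: the operator-controlled -m flag grants the zone
# access to a host directory, and Edera honors it exactly as specified.
# (Mount syntax assumed to be host-path:zone-path; check your CLI version.)
protect zone launch --name demo-zone
protect workload launch --zone demo-zone \
  -m /var/lib/app-data:/data \
  registry.example.com/app:latest
```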
Kubernetes-specific behaviors
**hostNetwork is explicitly rejected by Edera.** Pods requesting hostNetwork: true will fail with an error: “hostNetwork pods are not supported by Edera as it breaks isolation.” This is the one case where Edera overrides the pod spec, because host networking fundamentally breaks zone network isolation.

**hostPID and hostIPC are silently ignored.** Pods requesting hostPID: true or hostIPC: true will start successfully, but the zone will receive its own isolated PID and IPC namespaces regardless. This is safe (the zone remains isolated), but operators should be aware that these fields have no effect in Edera zones. The team is working on adding explicit rejection messages consistent with the hostNetwork behavior.

Hardening recommendations
Hardening Kubernetes deployments
Operators should use admission control to prevent pod specs that undermine zone isolation. Both OPA/Gatekeeper and Kyverno are widely used for this purpose. The Kubernetes Pod Security Standards provide a baseline.
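As a low-effort starting point before reaching for a full policy engine, the built-in Pod Security admission controller can enforce the Pod Security Standards per namespace; the baseline profile already blocks hostPath volumes, host namespaces, and privileged containers. The namespace name here is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    # Enforce the baseline profile; warn on anything that would
    # additionally violate the stricter restricted profile.
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```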
The following pod spec fields are relevant to zone isolation:
| Field | Edera behavior | Risk | Recommendation |
|---|---|---|---|
| hostPath volumes | Honored | Grants the zone read/write access to the host filesystem, which can undermine isolation | Block via admission control, or restrict to specific read-only paths |
| hostNetwork | Rejected | Breaks zone network isolation | Already blocked by Edera |
| hostPID | Silently ignored | No security risk (zone gets isolated PID namespace), but misleading to operators | Block via admission control for clarity |
| hostIPC | Silently ignored | No security risk (zone gets isolated IPC namespace), but misleading to operators | Block via admission control for clarity |
| privileged | Honored | Grants all Linux capabilities inside the zone | Restrict unless the workload requires it. The zone boundary still holds, but this widens the attack surface within the zone |
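Where a policy engine is preferred, a Kyverno ClusterPolicy can reject any pod that declares a hostPath volume. This mirrors Kyverno's well-known disallow-host-path sample policy; adjust the match scope and failure action to your cluster:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-path
spec:
  validationFailureAction: Enforce   # reject violating pods rather than audit
  background: true
  rules:
  - name: block-host-path
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "hostPath volumes are not allowed; they can undermine zone isolation."
      pattern:
        spec:
          # If a volumes list is present, no entry may set hostPath.
          =(volumes):
          - X(hostPath): "null"
```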
Hardening standalone deployments
When running standalone, the configuration surface is the protect CLI and the daemon configuration file.
| Area | Risk | Recommendation |
|---|---|---|
| CLI and daemon socket access | Anyone with access can create zones with arbitrary mounts | Restrict access to the protect CLI and daemon.socket using filesystem permissions. Only trusted operators should have access |
| Volume mounts (-m) | Host directories mounted into zones are accessible to the workload | Avoid mounting sensitive host paths. Use read-only mounts where possible |
| daemon.toml | Global settings including OCI registry mirrors and network subnets | Protect with appropriate filesystem permissions. Monitor for unauthorized changes |
| Network egress | Zones can reach any network destination by default | Apply nftables rules to restrict zone egress to required destinations. Layer custom rules on top of existing Edera forwarding rules |
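A declarative nftables sketch of the egress recommendation above. The zone subnet, registry address, and file path are assumptions (check daemon.toml for your actual zone subnet); the table is separate from Edera's own, so these rules layer on top of the existing forwarding rules rather than replacing them:

```
# Hypothetical /etc/nftables.d/edera-egress.nft
table inet edera-egress {
  chain forward {
    # Priority 10 runs after default filter-priority (0) chains,
    # so Edera's own forwarding rules are evaluated first.
    type filter hook forward priority 10; policy accept;

    # Allow DNS and an internal registry (addresses are illustrative).
    ip saddr 10.75.0.0/16 udp dport 53 accept
    ip saddr 10.75.0.0/16 ip daddr 10.0.5.20 tcp dport 443 accept

    # Drop all other zone-originated forwarded traffic.
    ip saddr 10.75.0.0/16 drop
  }
}
```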
Isolation mechanisms
Edera enforces the zone boundary through several layers:
- Hypervisor-enforced isolation. Each zone runs in its own hardware-isolated environment, managed by the hypervisor. Memory, CPU, and device access are partitioned at the hardware level.
- Dedicated kernel per zone. Each zone boots its own Linux kernel, separate from the host kernel. A vulnerability in the zone kernel does not affect the host or other zones.
- Filesystem restrictions. Volumes passed into zones are subject to restrictions that prevent access to host sockets, device nodes, and other special file types. Only regular files, directories, and symlinks are accessible.
- Controlled communication. The only communication channel between a zone and the host is inter-domain messaging (IDM), a structured protocol over shared memory. There is no shared filesystem, no shared kernel, and no shared network namespace between the zone and the host.
For a full technical deep dive into how these mechanisms work, see the architecture overview and component guide.
Further reading
- Edera zones—what a zone is and how to run workloads in one
- Architecture overview—how the hypervisor, zones, and workloads fit together
- Standalone operation—running Edera without Kubernetes
- Zone networking—configuring network filtering for standalone zones
- Trail of Bits security assessment—independent review of Edera’s isolation boundaries (October 2025)
- Kubernetes multi-tenancy—Kubernetes documentation on hard and soft multi-tenancy
- Pod Security Standards—Kubernetes baseline, restricted, and privileged pod security profiles