Image management and OverlayFS



Edera manages container images through its own OCI-compliant pipeline rather than relying on third-party container runtimes such as containerd (used by Docker) or CRI-O. This page explains how images move from a registry into a running container filesystem inside an Edera zone.

Why Edera manages images independently

Each Edera zone runs inside its own virtual machine with a dedicated kernel. The zone boundary is a hard isolation line. Nothing from the host's container runtime crosses into the zone: its image cache, snapshotter, and filesystem layers are not visible there.

Edera implements its own image pull, cache, and mount pipeline to ensure each zone only has access to the images it was explicitly provided. The host’s container runtime is never involved.

The image pipeline

When a zone starts a container workload, Edera handles the image in four stages:

OCI Registry
          │
          ▼
┌──────────────────────┐
│  Edera Image Cache   │  Pull and store OCI image layers
│  (host filesystem)   │
└──────────────────────┘
          │
          ▼
┌──────────────────────┐
│  SquashFS Conversion │  Flatten layers into a compressed,
│                      │  read-only filesystem image
└──────────────────────┘
          │
          ▼
┌──────────────────────┐
│  Zone Disk Mount     │  Mount squashfs as a virtual disk
│                      │  inside the zone
└──────────────────────┘
          │
          ▼
┌──────────────────────┐
│  OverlayFS Assembly  │  Layer a writable upper dir on top
│  (inside the zone)   │  of the read-only squashfs base
└──────────────────────┘

1. OCI image pull

Edera pulls images directly from OCI-compliant registries (Docker Hub, GHCR, ECR, and others). The pull uses Edera’s own OCI-compliant implementation, not third-party container runtimes like containerd or CRI-O.

Kubernetes imagePullPolicy still applies: Always re-pulls the image on every start, while IfNotPresent checks Edera's cache first.
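This is set in the pod spec the standard Kubernetes way; nothing Edera-specific is required. The image reference below is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: ghcr.io/example/app:1.2.3   # hypothetical image reference
      imagePullPolicy: IfNotPresent      # checks Edera's cache before pulling
```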

2. Image cache

Pulled images are stored in Edera’s image cache at /var/lib/edera/protect/cache/image on the host filesystem. This cache is completely separate from the host’s container runtime image store.

docker pull and crictl pull do not populate this cache, and pulling through Edera does not populate those runtimes' image stores. The two systems are completely independent.

3. SquashFS conversion

Before mounting an image into a zone, Edera converts the OCI image layers into a single SquashFS filesystem image. SquashFS is a compressed, read-only Linux filesystem.
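Conceptually, flattening means applying the layers in order, with upper layers overriding lower ones and OCI whiteout files (a `.wh.` filename prefix) deleting shadowed paths. The sketch below is not Edera's actual code; it models layers as plain dicts to show the merge rule:

```python
# Illustrative sketch: collapsing OCI image layers into one root tree,
# the way squashfs conversion produces a single read-only image.
# Layers are modeled as {path: content} dicts, applied bottom to top.

def flatten_layers(layers):
    root = {}
    for layer in layers:
        for path, content in layer.items():
            name = path.rsplit("/", 1)[-1]
            if name.startswith(".wh."):
                # OCI whiteout: remove the shadowed path from the merged tree.
                target = path.replace(".wh.", "", 1)
                root.pop(target, None)
            else:
                # Upper layers override lower layers.
                root[path] = content
    return root

base = {"etc/os-release": "alpine", "usr/bin/app": "v1"}
update = {"usr/bin/app": "v2", "etc/.wh.os-release": ""}
print(flatten_layers([base, update]))
# {'usr/bin/app': 'v2'}
```

The real conversion operates on tar layers and produces a SquashFS image, but the merge semantics are the same.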

4. OverlayFS assembly inside the zone

Once the squashfs image is mounted as a disk inside the zone, Edera’s init process uses OverlayFS to assemble the final container filesystem:

  • Lower layer (read-only): The squashfs image containing the container’s base filesystem.
  • Upper layer (writable): A tmpfs (memory-backed) filesystem by default, or a scratch disk if configured.

Reads come from the compressed squashfs base. Writes go to the upper layer using copy-on-write semantics. When the zone shuts down, the upper layer is gone. Nothing persists unless you’ve configured persistent storage.
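The read/write resolution described above can be sketched in a few lines. This is a simplified model, not Edera's implementation (real OverlayFS copies files up on first modification and handles metadata, whiteouts, and directories):

```python
# Illustrative model: an overlay over a read-only lower layer (the
# squashfs image) and a writable upper layer (tmpfs by default).

class Overlay:
    def __init__(self, lower):
        self.lower = lower   # read-only squashfs contents
        self.upper = {}      # writable tmpfs; empty at zone start

    def read(self, path):
        # The upper layer shadows the lower layer.
        if path in self.upper:
            return self.upper[path]
        return self.lower[path]

    def write(self, path, data):
        # Copy-on-write: writes land only in the upper layer;
        # the lower layer is never modified.
        self.upper[path] = data

fs = Overlay(lower={"etc/config": "default", "usr/bin/app": "v1"})
fs.write("etc/config", "tuned")
print(fs.read("etc/config"))   # "tuned"   -- served from the upper layer
print(fs.read("usr/bin/app"))  # "v1"      -- falls through to squashfs
print(fs.lower["etc/config"])  # "default" -- the base is untouched
```

When the zone shuts down, `upper` disappears with it, which is why nothing persists without configured persistent storage.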

How this differs from containerd and Docker

In a standard container runtime setup:

  1. The OCI container runtime (for example, containerd) pulls OCI image layers and stores them on the host.
  2. The container runtime snapshotter (typically overlayfs) assembles the layers on the host filesystem.
  3. The container process sees the assembled filesystem via mount namespaces, but everything runs on the same kernel on the same host.

In Edera:

  1. Edera pulls OCI image layers into its own cache on the host.
  2. The layers are converted to a single squashfs image.
  3. The squashfs is mounted as a virtual disk inside the zone (a separate VM with its own kernel).
  4. OverlayFS runs inside the zone’s kernel, not on the host.

The critical difference: a standard runtime's overlayfs runs on the host kernel, where a container escape could expose other containers' layers. Edera's overlayfs runs inside an isolated VM, so a container escape stays contained within the zone.

Performance considerations

  • SquashFS compression reduces disk reads for read-heavy workloads.
  • The first write to a file triggers a copy-up from the lower layer; subsequent writes hit the upper layer directly.
  • By default the upper layer is tmpfs, which is fast but consumes zone memory. For write-heavy workloads, configure a scratch disk to decouple disk usage from memory.
  • The initial squashfs conversion adds time to first launch; subsequent launches reuse the cached image.
