Using a scratch disk with Edera
Scratch disks are temporary, high-speed storage volumes used during workload execution. They’re designed for performance—not persistence.
By default, Edera zones use a memory-backed scratch disk (tmpfs) layered with overlayfs. This works for many workloads, but it doesn't scale well for disk-heavy jobs. For example, a workload that needs 60 GB of disk would require over 120 GB of memory and a host with at least 125 GB of available RAM.
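You can see this layering from inside a running workload: with the default setup, the root filesystem reports its type as overlay. A quick check (assuming your workload image includes df):
df -Th /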
To solve this, Edera supports multiple scratch disk backends—including loopback images and block devices—that let you decouple disk from memory.
Warning
Scratch disks are not durable. All data is wiped when a zone restarts or shuts down. Do not store important data here.
Why use a scratch disk?
- High disk demand with low memory: Some apps need lots of disk but little memory. A scratch disk avoids overprovisioning RAM.
- Better performance: Block devices offer better I/O throughput than memory-backed overlayfs.
- Overlayfs workarounds: Some containers misbehave with overlayfs. You can bypass it with a bind mount.
- Flexible zone sizing: Scratch disks let you tune memory and disk independently per zone.
TL;DR
- tmpfs (default): Edera allocates tmpfs using half of the zone's memory and layers overlayfs on top (see the sizing example after this list).
- Temporary scratch disk: Use --create-scratch-disk with protect zone launch to create a loopback disk image. (CLI only; Kubernetes support is coming soon.)
- Block device scratch disk: Attach a block device; the zone mounts it and formats it as ext4 on each launch.
- Bind-mount workload directories: Use --mount-scratch-disk to bypass overlayfs on specific paths.
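The half-memory rule matters when sizing zones: a zone launched with 8 GB of memory gets a 4 GB tmpfs scratch disk, and because tmpfs lives in RAM, every byte written to it consumes zone memory. Workloads that write more than that need one of the disk-backed options below.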
Background
When you run workloads in an Edera zone, they require temporary storage (a scratch disk). There are three modes:
- No scratch disk (default): Edera allocates tmpfs (half the zone's memory) and layers it with overlayfs.
- Created scratch disk: Use --create-scratch-disk <size-in-MB> to create a loopback file-backed volume. Overlayfs is layered on this disk.
- Attached scratch disk: Use --attach-scratch-disk <disk> to attach a physical or virtual block device and configure Edera to use it. The disk is formatted with ext4 and overlayfs is layered on top. See Option 2 below for how to claim a block device.
Option 1: Temporary scratch disk
Use a loopback file-based disk image for scratch storage:
protect zone launch --name temp-zone --create-scratch-disk 2048
This mounts a temporary disk for all workloads in the zone. Overlayfs is layered on top.
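The size argument is in MB, so 2048 creates a 2 GB disk image. To sanity-check the available space from inside a workload in the zone (a rough check; the exact mount layout may vary by version):
df -h /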
Option 2: Block device scratch disk
Use a physical or virtual block device for better performance.
Step 1: Attach block device
Add a second volume (for example, /dev/sdb) to your host system and verify it's visible:
lsblk
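Output varies by host, but an unclaimed second disk typically shows no mount point. An illustrative example (names and sizes will differ):
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0  100G  0 disk
└─sda1   8:1    0  100G  0 part /
sdb      8:16   0   60G  0 disk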
Step 2: Configure Edera daemon
Edit /var/lib/edera/protect/daemon.toml:
[block.devices]
[block.devices.disk0]
path = "/dev/sdb"
Then restart the daemon:
sudo systemctl restart protect-daemon
Step 3: Confirm device availability
protect device list
Make sure disk0 appears in the output.
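For a non-interactive check, for example in a provisioning script, grep for the device name:
protect device list | grep disk0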
Step 4: Launch a zone using the disk
protect zone launch --name test-zone --attach-scratch-disk disk0
Important
The device is formatted with ext4 on each launch. All existing data will be erased.
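Workloads launched into this zone use the block-backed scratch disk automatically. For example (image name illustrative):
protect workload launch --zone test-zone --name my-workload my-image:v1.0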
Bind-mount inside workloads
To bypass overlayfs and directly mount a scratch disk path:
protect workload launch --zone test-zone --name my-workload --mount-scratch-disk /data my-image:v1.0
To confirm the bind mount:
df -Th /data
If the filesystem type is not overlay, the bind mount is active.
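With a bind mount onto a block-backed scratch disk, the output should look something like this (device name and sizes illustrative):
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/vdb       ext4   59G   24K   56G   1% /data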
Summary
| Mode | Backing | Overlayfs | Persistent Disk |
|---|---|---|---|
| No scratch disk | tmpfs | Yes | No |
| Created scratch disk | Loopback image | Yes | No |
| Block device disk | Physical device | Yes (No if bind-mounted) | No (wiped each launch) |
Coming soon
We’re working on Kubernetes-native support for scratch disks via pod annotations. This will let you use scratch disk features without relying on CLI tooling.
Additional notes
Scratch disks are supported starting in Edera v1.2.0.