Using storage in Kubernetes
Edera supports native Kubernetes storage APIs. You can attach persistent storage to pods using standard PersistentVolumes and PersistentVolumeClaims.
Edera supports two volume modes, each with different characteristics:
- `volumeMode: Block` – Direct block device passthrough into the zone. Use this for I/O-intensive workloads like databases, message queues, and logging pipelines.
- `volumeMode: Filesystem` – Filesystem mounts shared between host and zone via 9pfs. Use this for general-purpose workloads where storage I/O is not the bottleneck.
Choosing a volume mode
Every Edera zone runs its own kernel inside a lightweight virtual machine. How storage reaches that kernel depends on the volume mode you choose.
Block mode
With volumeMode: Block, the block device is passed directly to the zone. The zone’s kernel owns the filesystem and performs I/O without any intermediary. This eliminates the proxy layer entirely and provides near-native storage performance.
Use block mode when your workload is I/O-intensive: databases, message queues, logging pipelines, or anything that benefits from low-latency disk access.
Block devices require a formatted filesystem before Edera can mount them. If your provisioner does not format the device automatically, you must format it yourself before deploying your workload. See the CSI-provisioned volumes section for an example.
Filesystem mode
With volumeMode: Filesystem, storage is shared between the host and the zone using 9pfs. Every I/O operation passes through the hypervisor: the guest kernel issues a request, the hypervisor proxies it to the host, and the result is returned.
This proxy architecture means the hypervisor can inspect and reject (NAK) any write that violates security policy. This is a meaningful security property, but it introduces overhead. Each I/O operation involves additional memory copies, context switches, and scheduling across the guest-host boundary.
Filesystem mode is appropriate for workloads where storage I/O is not the primary bottleneck: web applications, batch processors, CI/CD jobs, and similar compute-bound work.
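If you are unsure which mode a workload needs, a rough throughput check from inside a running pod can help you compare. This is an illustrative sketch, not an official benchmark: the deployment name and paths are placeholders from the examples below, it assumes the container image ships a `dd` that supports `oflag=direct`, and a dedicated tool such as `fio` gives more reliable numbers.

```shell
# Rough sequential-write check from inside a running pod.
# Run once against a Filesystem-mode mount, once against a
# Block-mode volume (formatted and mounted), and compare.
kubectl exec deploy/my-app -- \
  dd if=/dev/zero of=/var/lib/my-app/data/ddtest bs=1M count=512 oflag=direct

# Remove the test file afterwards.
kubectl exec deploy/my-app -- rm /var/lib/my-app/data/ddtest
```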
Summary
| | Block | Filesystem |
|---|---|---|
| Mechanism | Direct device passthrough | 9pfs proxy (guest to host) |
| Performance | Near-native | Reduced (proxy overhead) |
| Security boundary | Device is fully owned by zone | Hypervisor mediates every I/O |
| Best for | Databases, queues, logging | Web apps, batch jobs, CI/CD |
| Cross-boundary sharing | No | Yes |
Example: CSI-provisioned volume
This is the most common pattern for cloud environments. A CSI driver (such as the AWS EBS CSI driver) dynamically provisions a volume when you create a PersistentVolumeClaim.
Step 1: Create a block PVC
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3
```
Step 2: Format the block device
CSI-provisioned block devices are raw and unformatted. Edera requires a formatted filesystem on the device before it can mount it inside the zone.
Use a short-lived Job (running on the default runtime, not Edera) to format the device:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: format-block-device
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: formatter
          image: alpine:latest
          command: ["/bin/sh", "-c"]
          args:
            - |
              apk add --no-cache e2fsprogs
              echo "Formatting block device to ext4..."
              mkfs.ext4 /dev/data
              echo "Format complete."
          volumeDevices:
            - name: data
              devicePath: /dev/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-app-data
```
Wait for the Job to complete before proceeding:
```shell
kubectl wait --for=condition=complete job/format-block-device --timeout=120s
```
Step 3: Deploy your workload
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      annotations:
        dev.edera/resource-policy: "static"
    spec:
      runtimeClassName: edera
      containers:
        - name: app
          image: my-app:latest
          volumeDevices:
            - name: data
              devicePath: /var/lib/my-app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-app-data
```
The key elements:
- `volumeMode: Block` on the PVC provides direct device passthrough
- `volumeDevices` (not `volumeMounts`) attaches the block device at the specified path
- The formatter Job runs without `runtimeClassName`, so it uses the default container runtime
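Once the Deployment is running, you can sanity-check that the format step worked and that the device is populated inside the zone. These commands are illustrative: they assume the names from the example above and that the container image includes basic utilities like `df`.

```shell
# Confirm the formatter Job ran and reported success.
kubectl logs job/format-block-device

# Confirm the volume is mounted and has capacity inside the zone.
kubectl exec deploy/my-app -- df -h /var/lib/my-app/data
```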
Example: Filesystem mount
For workloads that do not require high I/O performance, use a standard filesystem PVC. No formatting step is required.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      annotations:
        dev.edera/resource-policy: "static"
    spec:
      runtimeClassName: edera
      containers:
        - name: app
          image: my-app:latest
          volumeMounts:
            - name: data
              mountPath: /var/lib/my-app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-app-data
```
Note the differences from the block mode example:
- No `volumeMode` specified (defaults to `Filesystem`)
- Uses `volumeMounts` instead of `volumeDevices`
- No formatting Job required
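To confirm a Filesystem-mode volume is being served through the proxy path, you can inspect the mount table inside the zone. This is a sketch: it assumes the image includes the `mount` utility, and the exact filesystem type string may vary with the Edera version (9pfs-backed mounts typically report type `9p`).

```shell
# Inspect how the volume is mounted inside the zone.
kubectl exec deploy/my-app -- mount | grep /var/lib/my-app/data
```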
Example: Local NVMe device
For bare-metal nodes or cloud instances with local NVMe storage, create a PersistentVolume that points directly at the device.
PersistentVolume
```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-raw-pv
spec:
  volumeMode: Block
  capacity:
    storage: 5Gi
  local:
    path: /dev/nvme0n1
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-host
```
PersistentVolumeClaim
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 5Gi
```
Workload
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      annotations:
        dev.edera/resource-policy: "static"
    spec:
      runtimeClassName: edera
      nodeSelector:
        kubernetes.io/hostname: my-host
      containers:
        - name: app
          image: my-app:latest
          volumeDevices:
            - name: data
              devicePath: /mnt/high-perf-storage
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: local-block-pvc
```
Additional notes
Native Kubernetes storage APIs (PersistentVolumes and block devices) are supported starting in Edera v1.6.0.
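Before deploying any of the examples above, it can be worth confirming that the `edera` RuntimeClass they reference actually exists on your cluster:

```shell
# The examples assume a RuntimeClass named "edera" is installed.
kubectl get runtimeclass edera
```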
Non-Kubernetes use cases
If you’re running Edera outside of Kubernetes, see:
- Claiming devices with Edera – Attach block or PCI devices to standalone zones
- Using a scratch disk with Edera – Configure temporary high-speed storage