NVIDIA GPU passthrough to an Edera zone
This guide shows how to pass an NVIDIA GPU through to an Edera zone using the protect CLI, load the NVIDIA driver, and run a GPU-accelerated workload inside the zone.
Prerequisites
Edera is successfully installed.
You have an NVIDIA GPU on your system.
Check GPU presence:
```shell
sudo lspci | grep "NVIDIA"
```

Expected output:

```
b4:00.0 3D controller: NVIDIA Corporation GH100 (rev a1)
```

You’ll need to define the GPU in your Protect config file. This tells Edera which GPUs are available for passthrough:

```toml
# /var/lib/edera/protect/daemon.toml
[pci.devices]

[pci.devices.gpu0]
locations = [
    "0000:b4:00.0",
]
permissive = true
msi_translate = false
power_management = true
rdm_reserve_policy = "relaxed"
```

The key gpu0 is the primary identifier for the GPU in Edera. It links the physical device to the zone and workload later via --device gpu0. You can choose any unique name here (for example gpu1), but it must not be duplicated and must match the name used in your launch commands.

Access to Edera’s NVIDIA zone kernel image:

```
ghcr.io/edera-dev/zone-nvidiagpu-kernel:6.15.8-nvidia-575.64.05
```

ℹ️ To gain access to Edera, reach out to the customer engineering team at support@edera.dev to discuss your requirements.
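If the host exposes more than one GPU, each device gets its own uniquely named table in the same config file. A hypothetical sketch for a second device — the name gpu1 and the PCI address 0000:b5:00.0 below are placeholders; substitute the addresses reported by lspci on your host:

```toml
# /var/lib/edera/protect/daemon.toml -- hypothetical second device entry.
# "gpu1" and "0000:b5:00.0" are placeholders for illustration only.
[pci.devices.gpu1]
locations = [
    "0000:b5:00.0",
]
permissive = true
msi_translate = false
power_management = true
rdm_reserve_policy = "relaxed"
```

Each named device can then be attached independently, e.g. --device gpu1 in a later launch command.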
Launch a zone with NVIDIA GPU passthrough
```shell
sudo protect zone launch -n zone-gpu \
  --device gpu0 \
  --kernel-verbose \
  -m 2048 \
  -R static \
  -k ghcr.io/edera-dev/zone-nvidiagpu-kernel:6.15.8-nvidia-575.64.05 \
  --pull-overwrite-cache
```

Explanation:

- -n zone-gpu: name of the zone
- --device gpu0: maps to pci.devices.gpu0, the device requested for the zone
- -m 2048: allocates 2048 MB (2 GB) of memory to the zone
- -R static: resource adjustment policy
- -k: OCI kernel image for the zone to use
- --pull-overwrite-cache: overwrite the image cache on pull
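When iterating on zone settings, the launch flags above can be wrapped in a small script. A minimal sketch, assuming only the commands shown in this guide; the environment-variable names and the DRY_RUN switch are illustrative conveniences, not part of Edera:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around the launch command above, with this guide's
# values as defaults. All variable names here are illustrative.
set -euo pipefail

ZONE_NAME="${ZONE_NAME:-zone-gpu}"
DEVICE="${DEVICE:-gpu0}"
MEM_MB="${MEM_MB:-2048}"
KERNEL="${KERNEL:-ghcr.io/edera-dev/zone-nvidiagpu-kernel:6.15.8-nvidia-575.64.05}"

cmd=(sudo protect zone launch -n "$ZONE_NAME"
  --device "$DEVICE"
  --kernel-verbose
  -m "$MEM_MB"
  -R static
  -k "$KERNEL"
  --pull-overwrite-cache)

# DRY_RUN=1 (the default here) prints the command instead of executing it,
# so the full invocation can be reviewed before touching the host.
if [ "${DRY_RUN:-1}" = "1" ]; then
  printf '%s ' "${cmd[@]}"; echo
else
  "${cmd[@]}"
fi
```

Run it with DRY_RUN=0 once the printed command looks right.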
Check that the zone launched successfully:
```shell
sudo protect zone list
```

Expected output:

| name     | uuid                                 | state | ipv4         | ipv6                 |
|----------|--------------------------------------|-------|--------------|----------------------|
| zone-gpu | 994d83fd-dc41-4b6d-98a9-6a89ad9b45cf | ready | 10.75.0.2/16 | fdd4:1476:6c7e::2/48 |

Load GPU driver in the zone
```shell
sudo protect zone exec -t zone-gpu -- /bin/busybox modprobe nvidia
```

Confirm the NVIDIA driver is loaded

Check the zone logs:

```shell
sudo protect zone logs zone-gpu
```

We can also check the loaded module from within the Edera zone by using lsmod:
```shell
sudo protect zone exec -t zone-gpu -- /bin/busybox lsmod
```

Expected output:

```
Module                  Size  Used by    Tainted: G
nvidia               12890112  0
```

Build and push a test image
Build your own image and push it to your registry (or the public, ephemeral ttl.sh registry) so the workload can run nvidia-smi.

Create the following file named Dockerfile:

```dockerfile
FROM nvidia/cuda:12.9.1-devel-ubuntu24.04
RUN apt update && apt install -y nvidia-utils-575-server=575.64.05
```

Build and push (ttl.sh example):
```shell
# build locally -- make sure you run this in the directory with the Dockerfile
docker build -t nvtest:0.0.2 .

# tag for ttl.sh
export ME=$(whoami)  # you can use something else here; this is for demo purposes only
docker tag nvtest:0.0.2 ttl.sh/${ME}/nvtest:0.0.2

# push
docker push ttl.sh/${ME}/nvtest:0.0.2
```

Note: ttl.sh images are short‑lived and intended for demos only. Use your own registry for anything persistent.
Run a GPU workload
Launch a workload with the NVIDIA GPU:
```shell
sudo protect workload launch -n workload-gpu \
  --privileged \
  --zone zone-gpu \
  ttl.sh/${ME}/nvtest:0.0.2 -- /bin/bash
```

Check it’s running:
```shell
sudo protect workload list
```

Expected output:

| name         | uuid                                 | zone                                 | state   |
|--------------|--------------------------------------|--------------------------------------|---------|
| workload-gpu | dcb4944a-9bd9-4f22-b075-a22e3ceb0fd0 | 994d83fd-dc41-4b6d-98a9-6a89ad9b45cf | running |

Verify GPU access via nvidia-smi:
```shell
sudo protect workload exec workload-gpu nvidia-smi
```

Expected output:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 575.64.05    Driver Version: 575.64.05    CUDA Version: 12.9     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA H100 NVL     Off  | 00000000:00:00.0 Off |                    0 |
| N/A   42C    P0    120W / 400W|      0MiB / 95830MiB |      9%      Default |
+-------------------------------+----------------------+----------------------+
| Processes:                                                       GPU Memory |
|  GPU   GI   CI   PID   Type   Process name                            Usage |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

Success. We’ve configured the GPU and launched a workload in an isolated zone.
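If you want this verification scripted rather than eyeballed, the nvidia-smi banner line can be parsed for the driver version. A minimal sketch, with the sample banner from the output above inlined as a string for illustration; in practice you would capture the line from `sudo protect workload exec workload-gpu nvidia-smi` instead:

```shell
# Second line of the nvidia-smi output above, inlined for illustration.
banner='| NVIDIA-SMI 575.64.05    Driver Version: 575.64.05    CUDA Version: 12.9     |'

# Pull out the value after "Driver Version:" with a sed capture group.
driver=$(printf '%s\n' "$banner" | sed -n 's/.*Driver Version: *\([0-9.]*\).*/\1/p')
echo "$driver"   # prints 575.64.05
```

A mismatch between this value and the version baked into the kernel image tag (575.64.05) would indicate driver/userspace skew between the zone and the workload image.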
Cleanup
```shell
sudo protect workload destroy workload-gpu
sudo protect zone destroy zone-gpu
```

Next steps
You’ve successfully launched an NVIDIA GPU-enabled Edera zone. You can now run secure, high-performance AI workloads in hardened zones with native driver support.