GPU passthrough to an Edera zone

This guide walks through passing a GPU through to an Edera zone with Protect and running a workload using AMD’s ROCm.

ℹ️ Note: This demo currently supports AMD GPUs only. NVIDIA GPU support is in development.

Watch the demo

Prefer to watch before trying it yourself? Here’s a walkthrough video showing the full process:

Prerequisites

  1. Edera is successfully installed
  2. A host with a supported GPU (for example AMD MI250)

Confirm GPU visibility with:

sudo lspci | grep "Display controller"

Expected output:

11:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Aldebaran/MI200 [Instinct MI250X/MI250] (rev 01)
14:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Aldebaran/MI200 [Instinct MI250X/MI250] (rev 01)
31:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Aldebaran/MI200 [Instinct MI250X/MI250] (rev 01)
34:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Aldebaran/MI200 [Instinct MI250X/MI250] (rev 01)
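Note that the config file below expects full PCI addresses in domain:bus:device.function form (for example 0000:11:00.0), while plain lspci prints the short form. Running lspci with -D includes the domain; alternatively, a small helper can prepend the default 0000 domain. This helper function is our own sketch, not part of Edera:

```shell
# Prepend the default PCI domain (0000) to lspci's short bus:device.function
# addresses, producing the form daemon.toml expects. Pure text processing, so
# it is safe to test anywhere; on the host, pipe real lspci output through it.
to_full_bdf() {
  sed -E 's/^([0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-9a-fA-F])/0000:\1/'
}

echo "11:00.0 Display controller: AMD/ATI Aldebaran" | to_full_bdf
```

Equivalently, `sudo lspci -D` prints addresses with the domain already included.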
  3. You’ll need to define the GPU in your Protect config file. This tells Edera which GPUs are available for passthrough:

cat /var/lib/edera/protect/daemon.toml

[pci.devices]
[pci.devices.gpu0]
locations = [
  "0000:11:00.0",
]
permissive = true
msi_translate = false
power_management = true
rdm_reserve_policy = "relaxed"

The key gpu0 is the primary identifier for the GPU in Edera. It links the physical device to the zone and workload later via --device gpu0. You can choose any unique name here (for example gpu1, mi250), but it must not be duplicated and must match the name used in your launch commands.
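On hosts with multiple GPUs, you can define additional devices the same way, one table per device. A sketch, assuming a second MI250 at the 14:00.0 address from the lspci output above (the name gpu1 is our placeholder):

```toml
[pci.devices.gpu0]
locations = ["0000:11:00.0"]
permissive = true
msi_translate = false
power_management = true
rdm_reserve_policy = "relaxed"

# Hypothetical second device; substitute an address from lspci on your host.
[pci.devices.gpu1]
locations = ["0000:14:00.0"]
permissive = true
msi_translate = false
power_management = true
rdm_reserve_policy = "relaxed"
```

Each zone then requests its device by name, for example --device gpu1.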

  4. Access to the Edera image registry

ℹ️ Note: To gain access to Edera, reach out to the customer engineering team at support@edera.dev to discuss your requirements.

Launch a zone with GPU passthrough

Launch a new zone and attach a GPU:

sudo protect zone launch -n zone-gpu0 \
  --device gpu0 \
  --kernel-cmdline-append 'drm.debug=0xff pic=debug' \
  --kernel-verbose \
  -m 8000 \
  -R static \
  -k ghcr.io/edera-dev/zone-amdgpu-kernel:6.14.10

Explanation:

  • -n zone-gpu0: the name of the zone
  • --device gpu0: maps to pci.devices.gpu0 in daemon.toml, requesting that device for the zone
  • --kernel-cmdline-append: appends kernel parameters for the zone
  • -m 8000: allocates 8 GB of memory to the zone
  • -R static: the resource adjustment policy
  • -k: the OCI kernel image for the zone to use

Check that the zone launched successfully:

sudo protect zone list

Expected outcome:

| name      | uuid                                 | state | ipv4         | ipv6                 |
|-----------|--------------------------------------|-------|--------------|----------------------|
| zone-gpu0 | a42b9d46-9069-4cf0-a00c-56738aee2932 | ready | 10.75.0.2/16 | fdd4:1476:6c7e::2/48 |
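If you are scripting the launch, you can wait for the zone to report ready by parsing the table output. A sketch that assumes the pipe-separated layout shown above; the zone_state helper is ours, demonstrated here on a sample row:

```shell
# Extract the "state" column for a named zone from `protect zone list`-style
# pipe-separated table output (column layout assumed from the example above).
zone_state() {
  awk -v z="$1" -F'|' '$2 ~ z { gsub(/ /, "", $4); print $4; exit }'
}

# Demo against the sample row from this guide; on a real host you would run:
#   until sudo protect zone list | zone_state zone-gpu0 | grep -qx ready; do sleep 2; done
printf '| zone-gpu0 | a42b9d46-9069-4cf0-a00c-56738aee2932 | ready | 10.75.0.2/16 | fdd4:1476:6c7e::2/48 |\n' \
  | zone_state zone-gpu0
```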

Load the GPU driver in the zone

Load the amdgpu kernel module inside the zone:

sudo protect zone exec -t zone-gpu0 -- /bin/busybox modprobe amdgpu

Confirm the driver is loaded:

sudo protect zone logs zone-gpu0
sudo protect zone exec -t zone-gpu0 -- /bin/busybox lsmod

Expected outcome:

| Module                       | Size    | Used by                                                                 |
|-----------------------------|---------|-------------------------------------------------------------------------|
| amdgpu                      | 14086144| 0                                                                       |
| amdxcp                      | 12288   | 1 amdgpu                                                                |
| i2c_algo_bit                | 12288   | 1 amdgpu                                                                |
| drm_client_lib              | 12288   | 1 amdgpu                                                                |
| drm_ttm_helper              | 16384   | 1 amdgpu                                                                |
| ttm                         | 106496  | 2 amdgpu,drm_ttm_helper                                                 |
| drm_exec                    | 12288   | 1 amdgpu                                                                |
| gpu_sched                   | 61440   | 1 amdgpu                                                                |
| drm_suballoc_helper         | 12288   | 1 amdgpu                                                                |
| video                       | 81920   | 1 amdgpu                                                                |
| drm_panel_backlight_quirks | 12288   | 1 amdgpu                                                                |
| cec                         | 65536   | 1 amdgpu                                                                |
| drm_buddy                   | 24576   | 1 amdgpu                                                                |
| drm_display_helper          | 221184  | 1 amdgpu                                                                |
| drm_kms_helper              | 237568  | 4 amdgpu,drm_client_lib,drm_ttm_helper,drm_display_helper               |
| syscopyarea                 | 12288   | 1 drm_ttm_helper                                                        |
| sysfillrect                 | 12288   | 1 drm_ttm_helper                                                        |
| sysimgblt                   | 12288   | 1 drm_ttm_helper                                                        |
| fb_sys_fops                 | 12288   | 1 drm_ttm_helper                                                        |
| backlight                   | 12288   | 3 amdgpu,video,drm_display_helper                                      |
| fb                          | 77824   | 1 drm_ttm_helper,drm_kms_helper,backlight                              |
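To script this check instead of reading the table, grep the module list. The has_module helper is our own sketch, demonstrated on sample lsmod-style output:

```shell
# Succeed if the named module appears in lsmod-style output on stdin
# (lsmod prints the module name as the first space-separated field).
has_module() {
  grep -q "^$1 "
}

# Demo with sample output; on the host you would run:
#   sudo protect zone exec -t zone-gpu0 -- /bin/busybox lsmod | has_module amdgpu
printf 'amdgpu 14086144 0\nttm 106496 2 amdgpu\n' | has_module amdgpu && echo "amdgpu loaded"
```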

Run a GPU workload

Launch a privileged workload using ROCm:

sudo protect workload launch -n workload_gpu0 \
  --privileged \
  --zone zone-gpu0 \
  rocm/rocm-terminal

Check the workload is running:

sudo protect workload list

Expected outcome:

| name           | uuid                                   | zone                                   | state   |
|----------------|----------------------------------------|----------------------------------------|---------|
| workload_gpu0  | 91deb366-fa91-4b6d-8a99-b1ad012f25c6   | 827c4315-07dc-464c-b137-3fa6cff195a9   | running |

Run rocm-smi inside the workload to confirm it can see the GPU:

sudo protect workload exec workload_gpu0 -- sudo rocm-smi

Expected outcome:

$ sudo protect workload exec workload_gpu0 -- sudo rocm-smi
[2025-07-16T10:17:37Z WARN styrolite::wrap] unable to set target GID: Os { code: 1, kind: PermissionDenied, message: "Operation not permitted" }

========================= ROCm System Management Interface =========================
===================================================================================
GPU  Temp   Power  SCLK     MCLK     Fan  Perf  PwrCap  VRAM%  GPU%  
ID   (Edge) (Avg)  (MHz)    (MHz)    (%)        (Watt)         
===================================================================================
0    48.0°C 0.0W   1700Mhz  1600Mhz  0%   auto  560.0W   0%     0%
===================================================================================
============================= End of ROCm SMI Log ================================

Success. We’ve configured GPU passthrough and launched a workload in an isolated zone.

Cleanup

Destroy the workload:

sudo protect workload destroy workload_gpu0

Unload the driver:

sudo protect zone exec -t zone-gpu0 -- /bin/busybox modprobe -r amdgpu

Destroy the zone:

sudo protect zone destroy zone-gpu0

Next steps

Now that you’ve tested GPU passthrough, you can integrate this into your AI pipeline or run additional ROCm-based workloads within an isolated Edera zone.

Further reading

GPU support in Edera
Edera architecture overview
Edera zones
