daemon.toml
daemon.toml configures the Edera Protect daemon. It lives at /var/lib/edera/protect/daemon.toml and is generated with defaults the first time the daemon starts.
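When generated, the file looks roughly like the sketch below (the [pci] and [block] sections are omitted because they have no defaults; the values shown mirror the defaults documented in the sections that follow):
[oci]
docker-hub-mirror = "index.docker.io"
[zone]
cache-default-kernel = true
cache-default-initrd = true
memory-limit-mb = 1024
[network]
nameservers = ["1.1.1.1", "1.0.0.1", "2606:4700:4700::1111", "2606:4700:4700::1001"]
[network.ipv4]
subnet = "10.75.0.0/16"
[network.ipv6]
subnet = "fdd4:1476:6c7e::/48"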
Restart protect-daemon after editing this file for changes to take effect:
sudo systemctl restart protect-daemon
OCI
The [oci] section controls how the daemon pulls container images.
[oci]
docker-hub-mirror = "index.docker.io"
Options
| Option | Type | Default | Description |
|---|---|---|---|
| docker-hub-mirror | string | "index.docker.io" | Registry hostname to use in place of Docker Hub when resolving unqualified image references. |
docker-hub-mirror
Hostname the daemon contacts whenever an image reference would otherwise resolve to Docker Hub. Set this to a pull-through cache or private mirror to avoid hitting Docker Hub directly.
[oci]
docker-hub-mirror = "mirror.example.com"Zone
The [zone] section sets defaults for zones launched by the daemon.
[zone]
cache-default-kernel = true
cache-default-initrd = true
memory-limit-mb = 1024
Options
| Option | Type | Default | Description |
|---|---|---|---|
| cache-default-kernel | boolean | true | Cache the default zone kernel image on disk for reuse across zone launches. |
| cache-default-initrd | boolean | true | Cache the default zone initrd image on disk for reuse across zone launches. |
| memory-limit-mb | integer | 1024 | Default memory limit, in megabytes, applied to a zone when it is launched without an explicit memory size. |
cache-default-kernel
When true, the daemon keeps the default zone kernel image on disk after first use so subsequent zone launches skip re-fetching it. Set to false to always pull a fresh copy.
cache-default-initrd
When true, the daemon keeps the default zone initrd image on disk after first use. Set to false to always pull a fresh copy.
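For example, to make the daemon re-fetch both the kernel and initrd images on every zone launch instead of reusing cached copies (illustrative, not the default):
[zone]
cache-default-kernel = false
cache-default-initrd = false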
memory-limit-mb
Default memory ceiling, in MiB, for any zone that does not specify its own memory limit at launch. Per-zone and per-pod overrides take precedence over this value.
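For example, to raise the default ceiling to 4096 MiB for zones that do not request their own limit (illustrative value):
[zone]
memory-limit-mb = 4096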
Network
The [network] section configures the default zone network for zones launched without Kubernetes (via protect zone launch). In a Kubernetes context, all of these settings are ignored and zone/pod networking is managed by the cluster CNI.
[network]
nameservers = [
"1.1.1.1",
"1.0.0.1",
"2606:4700:4700::1111",
"2606:4700:4700::1001",
]
[network.ipv4]
subnet = "10.75.0.0/16"
[network.ipv6]
subnet = "fdd4:1476:6c7e::/48"Options
| Option | Type | Default | Description |
|---|---|---|---|
| nameservers | array of strings | Cloudflare public resolvers | DNS resolvers handed to zones. |
| ipv4.subnet | string | "10.75.0.0/16" | CIDR from which IPv4 addresses are allocated to zones. |
| ipv6.subnet | string | "fdd4:1476:6c7e::/48" | CIDR from which IPv6 addresses are allocated to zones. |
nameservers
DNS resolvers given to every zone. Accepts both IPv4 and IPv6 addresses. Defaults to Cloudflare’s public resolvers (1.1.1.1, 1.0.0.1, 2606:4700:4700::1111, 2606:4700:4700::1001).
[network]
nameservers = ["10.0.0.53", "10.0.0.54"]ipv4.subnet
CIDR the daemon allocates IPv4 addresses from when assigning an interface to a zone. Choose a subnet that does not collide with networks the host already routes to.
[network.ipv4]
subnet = "10.200.0.0/16"ipv6.subnet
CIDR the daemon allocates IPv6 addresses from when assigning an interface to a zone.
[network.ipv6]
subnet = "fd12:3456:789a::/48"PCI
The [pci] section declares PCI devices the daemon may pass through to zones. Each device is a named entry under [pci.devices.<name>]; the name is what you reference with protect zone launch --device <name>.
[pci.devices.gpu0]
locations = ["0000:01:00.0"]
permissive = false
msi-translate = false
power-management = false
rdm-reserve-policy = "strict"
modules = []
[pci.devices.gpu0.module-parameters]
Device options
| Option | Type | Default | Description |
|---|---|---|---|
| locations | array of strings | required | PCI bus addresses (BDF) that make up this device, for example "0000:01:00.0". |
| permissive | boolean | false | Allow the zone to access PCI configuration space normally restricted by the hypervisor. |
| msi-translate | boolean | false | Translate MSI interrupts on behalf of the zone. |
| power-management | boolean | false | Allow the zone to manage device power state. |
| rdm-reserve-policy | string | "strict" | RMRR (Reserved Memory Region) reservation policy. One of "strict" or "relaxed". |
| modules | array of strings | [] | Kernel modules to load in the zone for this device. |
| module-parameters | table of arrays | {} | Per-module parameters, keyed by module name. |
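Once declared, a device is attached to a zone by its entry name. Assuming the gpu0 entry shown above, the launch invocation would look something like:
protect zone launch --device gpu0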
locations
List of PCI BDF addresses (domain:bus:device.function) that belong to this logical device. Most devices have one address; multi-function devices that must be passed through together list all of their functions here.
locations = ["0000:01:00.0", "0000:01:00.1"]permissive
Set to true to let the zone touch PCI configuration registers the hypervisor would otherwise hide. Required for some passthrough scenarios; reduces isolation when enabled.
msi-translate
Set to true to have the hypervisor translate Message Signaled Interrupts for this device. Needed for devices that require MSI but cannot program it directly under passthrough.
power-management
Set to true to let the zone control the device’s power state (suspend, resume). Off by default to keep the host in charge of device power.
rdm-reserve-policy
How strictly to enforce IOMMU Reserved Memory Regions for this device.
"strict": fail to attach the device if its reserved regions cannot be honored. Default; safer."relaxed": attach the device even if reserved regions cannot be fully honored. May be required for some legacy devices.
modules
Kernel modules to load inside the zone before the device is brought up. Useful when the zone’s default kernel does not auto-load the driver for this device.
modules = ["nvme", "nvme-core"]module-parameters
Per-module parameters to apply when loading the modules listed in modules. Keys are module names; values are arrays of parameter strings.
[pci.devices.nic0.module-parameters]
ixgbe = ["allow_unsupported_sfp=1", "max_vfs=8"]
Block
The [block] section declares host block devices the daemon may attach to zones. Each device is a named entry under [block.devices.<name>]; the name is what you reference at zone launch time (for example protect zone launch --attach-scratch-disk <name>). In a Kubernetes context, all of these settings are ignored and zone/pod block device mounts are instead managed directly by native Kubernetes block device APIs.
[block.devices.mydevice]
path = "/dev/sdb"Device options
| Option | Type | Default | Description |
|---|---|---|---|
| path | string | required | Absolute path to the host block device, for example /dev/sdb or /dev/nvme0n1. |
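Once declared, the device is attached to a zone by its entry name at launch time. Assuming the mydevice entry shown above, the invocation would look something like:
protect zone launch --attach-scratch-disk mydevice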
path
Absolute path to the host block device that should be exposed to zones under this name. The daemon does not create or partition the device; it must already exist on the host.
[block.devices.scratch0]
path = "/dev/nvme0n1"See also
- Claiming devices with Edera: end-to-end walkthrough for adding [pci.devices] and [block.devices] entries and attaching them to a zone.
- NVIDIA GPU passthrough: example [pci.devices] configuration for GPUs.
- Using storage in Kubernetes: end-to-end walkthrough for using block devices in Kubernetes.