Mixed deployment patterns
Most production Kubernetes clusters are heterogeneous—different node types optimized for different purposes: disk-intensive databases, memory-heavy caches, GPU workloads, and now isolation-optimized nodes running Edera.
This guide covers how to deploy Edera nodes as part of a heterogeneous cluster.
Why dedicated nodes need scheduler protection
Think of Edera nodes like reserved EV charging spots in a parking garage. Without a sign saying “EV charging only,” any car can park there—even ones that don’t need to charge. You end up with expensive infrastructure sitting unused while the cars that need it circle the lot.
The same thing happens with Edera nodes. Every Edera node still has the default containerd runtime, so without scheduler protection, any workload can schedule there—even ones that don’t need isolation.
If you’re familiar with Kubernetes patterns for GPU nodes, it’s the same idea: taint specialized nodes so only workloads that need them can schedule there.
Without protection, you get:
- Inefficient use of isolation-optimized infrastructure
- Resource reporting problems (Kubernetes doesn’t understand Edera’s split resource pools)
- Potential for premature evictions when dom0 and domU resource usage diverge
Recommended pattern: Dedicated Edera nodes with taints
Run Edera on a dedicated set of nodes, tainted to prevent non-isolated workloads from scheduling there.
┌──────────────────────────────┐ ┌──────────────────────────────┐
│ Edera Node (tainted) │ │ Standard Node │
│ │ │ │
│ ┌────────────────────────┐ │ │ ┌────────────────────────┐ │
│ │ Edera Zone │ │ │ │ Container │ │
│ │ (isolated workload) │ │ │ │ (standard workload) │ │
│ └────────────────────────┘ │ │ └────────────────────────┘ │
│ ┌────────────────────────┐ │ │ ┌────────────────────────┐ │
│ │ Edera Zone │ │ │ │ Container │ │
│ │ (isolated workload) │ │ │ │ (standard workload) │ │
│ └────────────────────────┘ │ │ └────────────────────────┘ │
│ │ │ │
│ System pods only in dom0 │ │ │
└──────────────────────────────┘ └──────────────────────────────┘

Step 1: Taint Edera nodes
Apply a taint to prevent non-Edera workloads from scheduling:
kubectl taint nodes <node-name> edera.dev/isolation=required:NoSchedule

System pods like kube-proxy will still run in dom0—kube-proxy tolerates all taints by default. Other system pods may need explicit tolerations.
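To confirm the taint is in place (an optional check; <node-name> is the same placeholder as above):

kubectl get node <node-name> -o jsonpath='{.spec.taints}'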
Step 2: Configure RuntimeClass with tolerations
The RuntimeClass tells Kubernetes how to schedule Edera workloads. Include the toleration so Edera workloads can schedule on tainted nodes:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: edera
handler: edera
scheduling:
  tolerations:
    - key: edera.dev/isolation
      operator: Equal
      value: required
      effect: NoSchedule

The handler: edera field tells the CRI which runtime to use. Combined with the taint/toleration above, pods will only schedule to Edera nodes. You don’t need a separate nodeSelector in the pod spec.

Step 3: Deploy workloads
Workloads only need to specify the RuntimeClass:
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: edera
  containers:
    - name: app
      image: myapp:latest

The RuntimeClass scheduling rules handle the rest—no nodeSelector or tolerations needed in the pod spec.
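Once the pod is running, a quick way to verify placement is to check which node it was scheduled to; the NODE column should show one of your tainted Edera nodes:

kubectl get pod isolated-app -o wide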
Specialized node types
For clusters with multiple specialized node types (GPU, high-memory, etc.), extend the RuntimeClass pattern:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: edera-gpu
handler: edera
scheduling:
  nodeSelector:
    workload-type: gpu
  tolerations:
    - key: edera.dev/isolation
      operator: Equal
      value: required
      effect: NoSchedule

workload-type: gpu is an example label. Replace it with whatever labeling convention your cluster uses to identify GPU-enabled nodes (for example, accelerator: nvidia-gpu).

This selects nodes that have both the Edera runtime and a GPU. Application developers just specify runtimeClassName: edera-gpu—they don’t need to know the underlying node topology.
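If your GPU nodes aren’t labeled yet, you can apply the example label yourself; the node name and the workload-type: gpu key/value below are placeholders matching the example above:

kubectl label nodes <gpu-node-name> workload-type=gpu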
Enforcing isolation with Kyverno
Use Kyverno policies to automatically assign the Edera RuntimeClass to pods in designated namespaces. This works regardless of your node topology.
apiVersion: policies.kyverno.io/v1
kind: MutatingPolicy
metadata:
  name: assign-edera-runtime
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        operations: ["CREATE", "UPDATE"]
    namespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: In
          values:
            - production
            - secure-workloads
  mutations:
    - patchType: JSONPatch
      jsonPatch:
        expression: |
          has(object.spec.runtimeClassName) ?
          [
            JSONPatch{
              op: "replace",
              path: "/spec/runtimeClassName",
              value: "edera"
            }
          ] :
          [
            JSONPatch{
              op: "add",
              path: "/spec/runtimeClassName",
              value: "edera"
            }
          ]

MutatingPolicy replaced ClusterPolicy starting in Kyverno v1.17. If you’re running Kyverno v1.16, use apiVersion: policies.kyverno.io/v1beta1. For v1.15 or earlier, use the ClusterPolicy API.

See the Kyverno guide for full setup.
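For releases that only support the ClusterPolicy API, a roughly equivalent policy can be written with a strategic merge patch. The sketch below is a minimal, illustrative example (the rule name is arbitrary and the namespaces are copied from above; verify the syntax against your Kyverno version):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: assign-edera-runtime
spec:
  rules:
    - name: set-edera-runtime-class
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - production
                - secure-workloads
      mutate:
        # Strategic merge: adds runtimeClassName when missing and overwrites it when present.
        patchStrategicMerge:
          spec:
            runtimeClassName: edera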
Persisting node configuration
Node labels and taints are ephemeral—they get removed during node upgrades or remediation (when the node is deleted and replaced). To persist them:
EKS: Configure labels and taints in your managed node group or launch template (see the eksctl sketch below).
Cluster API: Set labels and taints in your MachineDeployment spec.
Self-managed: Include labeling and tainting in your node provisioning automation.
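As an illustration of the EKS option, an eksctl managed node group can carry both the label and the taint so they are reapplied whenever nodes are replaced. The cluster name, region, instance type, capacity, and label below are placeholders; keep the taint exactly as applied in Step 1:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder cluster name
  region: us-west-2       # placeholder region
managedNodeGroups:
  - name: edera-nodes
    instanceType: m5.xlarge     # placeholder; choose a type that meets Edera's node requirements
    desiredCapacity: 3
    labels:
      workload-type: isolation  # example label; optional
    taints:
      - key: edera.dev/isolation
        value: required
        effect: NoSchedule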
Initial rollout
- Start with 2-3 dedicated Edera nodes—install using the installation guide
- Taint those nodes with edera.dev/isolation=required:NoSchedule
- Create the RuntimeClass with tolerations (see above)
- Deploy test workloads to a non-critical namespace (see the smoke-test sketch after this list)
- Validate with the POV validation suite
- Expand by adding more Edera nodes to your node group
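To smoke-test the “deploy test workloads” item above, the hypothetical commands below run a throwaway pod with the edera RuntimeClass in a scratch namespace (edera-test, edera-smoke, and the nginx image are placeholders):

kubectl create namespace edera-test
kubectl run edera-smoke --namespace edera-test --image=nginx \
  --overrides='{"apiVersion": "v1", "spec": {"runtimeClassName": "edera"}}'
kubectl get pod edera-smoke --namespace edera-test -o wide   # NODE should be an Edera node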
Why not mixed-use nodes?
Installing Edera on nodes that also run standard workloads creates operational challenges:
- Limited dom0 resources: By default, dom0 (where standard containers run) only gets 35% of the node’s memory. Running Edera on nodes where you don’t plan to use it artificially limits available resources.
- Resource reporting mismatch: Kubernetes sees one resource pool, but Edera splits it between dom0 and domU. This can cause incorrect capacity calculations.
- Premature evictions: Standard containers may hit OOM conditions before Kubernetes detects memory pressure, since the kubelet sees a single combined resource pool rather than the dom0/domU split.
- Inefficient utilization: Isolation-optimized nodes should run isolation workloads.
For production deployments, use dedicated Edera nodes with taints.
Next steps
- Install Edera on your first nodes
- Deploy applications with Edera to start isolating workloads
- Set up Kyverno auto-assignment for namespace-based isolation
- Run the POV validation suite to verify your deployment