# General troubleshooting

## RuntimeClass: check installation

To verify the Edera runtime is installed and visible to Kubernetes:

```shell
kubectl get runtimeclass -o yaml
```

You should see output similar to:
```yaml
apiVersion: v1
items:
- apiVersion: node.k8s.io/v1
  handler: edera
  kind: RuntimeClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"node.k8s.io/v1","handler":"edera","kind":"RuntimeClass","metadata":{"annotations":{},"name":"edera"}}
    creationTimestamp: "2025-05-15T20:48:52Z"
    name: edera
    resourceVersion: "35453"
    uid: f51f6d5f-237a-4c35-bf72-0d69a34ff6cb
kind: List
metadata:
  resourceVersion: ""
```

## Is the pod running in an Edera zone?
To use the `protect` CLI you will need access to your node:

```shell
protect zone list
```

You should see something like:
```
┌─────────────────────────────────────┬──────────────────────────────────────┬───────┬───────────────┬──────────────────────────────┐
│ name                                ┆ uuid                                 ┆ state ┆ ipv4          ┆ ipv6                         │
╞═════════════════════════════════════╪══════════════════════════════════════╪═══════╪═══════════════╪══════════════════════════════╡
│ k8s_edera-protect_edera-protect-pod ┆ ab2e8fdc-48dd-4b4e-8c83-0e81228da5f0 ┆ ready ┆ 10.0.2.169/32 ┆ fe80::a4d6:c6ff:fef8:c3aa/32 │
└─────────────────────────────────────┴──────────────────────────────────────┴───────┴───────────────┴──────────────────────────────┘
```

If you don’t have node access, you can check with kubectl:

```shell
kubectl describe pod <pod-name> | grep -B 3 edera
```

Expected output:
```
Name:                 edera-protect-pod
Namespace:            edera-protect
Priority:             0
Runtime Class Name:   edera
```

💡 Tip: Use `protect zone list -o yaml` for more detailed zone or workload info.
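For an end-to-end check, you can also schedule a minimal pod that requests the `edera` RuntimeClass. This is an illustrative sketch — the pod name and image are placeholders, not from the Edera docs:

```yaml
# Illustrative test pod; name and image are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: edera-test
spec:
  runtimeClassName: edera
  containers:
  - name: app
    image: nginx
```

Once it is running, the `kubectl describe` check above (or `protect zone list` on the node) should show it backed by the `edera` runtime.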
## Debugging Edera
Here are some general steps to debug issues with Edera:
### Confirm the pod exists

```shell
kubectl get pods --all-namespaces
```

### Check Edera sees the pod

(The command below requires node access.)

```shell
protect zone list
```

### Check Edera services
Make sure the systemd services are running:

```shell
systemctl status protect-cri
systemctl status protect-daemon
```

Example output:
```
protect-cri.service - Edera Protect CRI
   Loaded: loaded (/usr/lib/systemd/system/protect-cri.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2025-05-15 18:08:36 UTC; 1 weeks 5 days ago
 Main PID: 2806 (protect-cri)
   CGroup: /system.slice/protect-cri.service
           └─2806 /usr/sbin/protect-cri
```

### View logs
Check the logs for the CRI and daemon services. `protect-cri` invokes `protect-daemon`, so reviewing the logs of both will usually show how and where things went wrong:

```shell
journalctl -u protect-cri
journalctl -u protect-daemon
```

### Enable debug logging
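To flip the log level without an interactive editor, a one-line `sed` also works. This is a sketch: it assumes the unit file contains the line `Environment=RUST_LOG=info` exactly as shown in the manual edit below, and `UNIT` is just a convenience variable.

```shell
# Sketch: switch RUST_LOG from info to debug in the protect-cri unit file.
# Assumes the file contains the line "Environment=RUST_LOG=info".
UNIT=${UNIT:-/usr/lib/systemd/system/protect-cri.service}
[ -w "$UNIT" ] \
  && sed -i 's/^Environment=RUST_LOG=info$/Environment=RUST_LOG=debug/' "$UNIT" \
  || echo "skipping: $UNIT not writable (run as root on the node)"
```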
Edit the systemd service file:

```shell
sudo vi /usr/lib/systemd/system/protect-cri.service
```

```
# Change:
Environment=RUST_LOG=info
# To:
Environment=RUST_LOG=debug
```

Then reload and restart:
```shell
sudo systemctl daemon-reload
sudo systemctl restart protect-cri
journalctl -u protect-cri
```

### Check pod status
If a pod won’t start:

```shell
kubectl describe pod <pod-name> -n <namespace>
```

If no cause is shown, `protect zone list` may show a zone that failed to start.
## Installing Edera manually
If you’ve installed Edera manually, reboot the node after installation so it boots with our hypervisor; this lets our CRI provide the backend for the `edera` runtime.
### Verify Edera is running
The Edera services must be in the `active` state:

```shell
sudo systemctl is-active protect-daemon
sudo systemctl is-active protect-cri
```

### Verify the kubelet is aware of Edera
The Edera installer passes the runtime endpoint to the kubelet as a CLI argument, which takes precedence over the on-disk config. To verify the kubelet is pointing at Edera:

```shell
ps aux | grep kubelet
```
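Rather than eyeballing the full command line, you can pull out just the endpoint flag. This is a sketch — the `cri_endpoint` helper name is ours, not part of Edera's tooling:

```shell
# Sketch: extract the --container-runtime-endpoint flag from a kubelet command line.
cri_endpoint() {
  grep -o 'container-runtime-endpoint=[^ ]*'
}

# On a live node you would pipe ps into it:
#   ps aux | grep '[k]ubelet' | cri_endpoint
# Demonstrated here on a sample command line:
echo "kubelet --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket" | cri_endpoint
# prints: container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket
```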
### Kubelet is failing

If Edera Protect is installed but `systemctl status kubelet` shows the kubelet failing, the configuration file may be incorrect.
For Amazon Linux it should look like:

```shell
cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=" --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket"
```

For LKE / Akamai it should look like:
```shell
cat /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cloud-provider=external --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket"
```
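Either file can be sanity-checked the same way. This is a sketch — `check_kubelet_args` is our own helper name, and the socket path is the one from the examples above:

```shell
# Sketch: confirm a kubelet args file points at the Edera CRI socket.
check_kubelet_args() {
  if grep -q 'container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket' "$1"; then
    echo "ok: Edera CRI endpoint configured in $1"
  else
    echo "missing: Edera CRI endpoint not found in $1"
  fi
}

# Usage on a node:
#   check_kubelet_args /etc/default/kubelet
```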