General troubleshooting

First step: run edera-check

If something isn’t working, start by running edera-check postinstall on the affected node:

sudo edera-check postinstall

Or via Docker:

docker run --pull always --pid host --privileged \
  ghcr.io/edera-dev/edera-check:stable postinstall

This validates your system configuration and generates a diagnostic report bundle (a .tar.gz file saved locally). If you need to contact support, send this bundle to support@edera.dev. It contains the system info we need to help diagnose your environment.
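
Once the bundle is generated, you can inspect its contents before sending it. For example, assuming the report was written to the current directory as edera-check-report.tar.gz (the exact filename will differ; use the one edera-check prints):

# List the files in the report bundle without extracting it
tar -tzf edera-check-report.tar.gz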

RuntimeClass: check installation

To verify the Edera runtime is installed and visible to Kubernetes:

kubectl get runtimeclass -o yaml

You should see output similar to:

apiVersion: v1
items:
- apiVersion: node.k8s.io/v1
  handler: edera
  kind: RuntimeClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"node.k8s.io/v1","handler":"edera","kind":"RuntimeClass","metadata":{"annotations":{},"name":"edera"}}        
    creationTimestamp: "2025-05-15T20:48:52Z"
    name: edera
    resourceVersion: "35453"
    uid: f51f6d5f-237a-4c35-bf72-0d69a34ff6cb
kind: List
metadata:
  resourceVersion: ""
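
For a quicker check, a JSONPath query prints just the handler instead of the full object:

kubectl get runtimeclass edera -o jsonpath='{.handler}'

This should print edera.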

Is the pod running in an Edera zone?

To use the protect CLI, you'll need access to your node. Then run:

protect zone list

You should see something like:

┌─────────────────────────────────────┬──────────────────────────────────────┬───────┬───────────────┬──────────────────────────────┐
│ name                                ┆ uuid                                 ┆ state ┆ ipv4          ┆ ipv6                         │
╞═════════════════════════════════════╪══════════════════════════════════════╪═══════╪═══════════════╪══════════════════════════════╡
│ k8s_edera-protect_edera-protect-pod ┆ ab2e8fdc-48dd-4b4e-8c83-0e81228da5f0 ┆ ready ┆ 10.0.2.169/32 ┆ fe80::a4d6:c6ff:fef8:c3aa/32 │
└─────────────────────────────────────┴──────────────────────────────────────┴───────┴───────────────┴──────────────────────────────┘

If you don’t have node access, you can check with kubectl:

kubectl describe pod <pod-name> | grep -B 3 edera

Expected output:

Name:                edera-protect-pod
Namespace:           edera-protect
Priority:            0
Runtime Class Name:  edera
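
If you'd rather read a single value than grep, the pod's runtime class can be fetched directly with JSONPath:

kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.runtimeClassName}'

This should print edera for a pod running in an Edera zone.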

💡 Tip: Use protect zone list -o yaml for more detailed zone or workload info.

Debugging Edera

Here are some general steps to debug issues with Edera:

Confirm the pod exists

kubectl get pods --all-namespaces
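
To narrow this to pods using the Edera runtime, a JSONPath filter works (assuming your RuntimeClass is named edera, as shown above):

kubectl get pods --all-namespaces -o jsonpath='{range .items[?(@.spec.runtimeClassName=="edera")]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'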

Check Edera sees the pod

(This step requires node access.)

protect zone list

Check Edera services

Make sure the systemd services are running:

systemctl status protect-cri
systemctl status protect-daemon

Example output:

 protect-cri.service - Edera Protect CRI
   Loaded: loaded (/usr/lib/systemd/system/protect-cri.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2025-05-15 18:08:36 UTC; 1 weeks 5 days ago
 Main PID: 2806 (protect-cri)
   CGroup: /system.slice/protect-cri.service
           └─2806 /usr/sbin/protect-cri

View logs

Check logs for the CRI and daemon services. Since protect-cri invokes protect-daemon, reviewing the logs of both systemd services will usually show you how and where things went wrong.

journalctl -u protect-cri
journalctl -u protect-daemon
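
On long-running nodes it usually helps to narrow the time window and follow new output. For example:

# Show the last hour of CRI logs and keep following
journalctl -u protect-cri --since "1 hour ago" -f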

Enable debug logging

Edit the systemd service file:

sudo vi /usr/lib/systemd/system/protect-cri.service
# Change:
Environment=RUST_LOG=info
# To:
Environment=RUST_LOG=debug

Then reload and restart:

sudo systemctl daemon-reload
sudo systemctl restart protect-cri
journalctl -u protect-cri
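
If you'd rather not modify the packaged unit file (edits there can be overwritten on upgrade), a systemd drop-in override achieves the same thing:

sudo systemctl edit protect-cri
# In the editor that opens, add:
#   [Service]
#   Environment=RUST_LOG=debug
# This is saved as a drop-in under /etc/systemd/system/protect-cri.service.d/
sudo systemctl restart protect-cri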

Check pod status

If a pod won’t start:

kubectl describe pod <pod-name> -n <namespace>

If no cause is shown, protect zone list may show a zone that failed to start.
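
The pod's events can also point at the failure; filtering them by pod name keeps the output manageable:

kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name>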

Installing Edera manually

If you’ve installed Edera manually, you’ll need to reboot the node after installation so it boots with our hypervisor, allowing our CRI to provide the backend for the edera runtime.

Verify Edera is running

The Edera services must be in the active state:

sudo systemctl is-active protect-daemon
sudo systemctl is-active protect-cri
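
A small loop can check both in one pass, for example:

for svc in protect-daemon protect-cri; do
  systemctl is-active --quiet "$svc" || echo "$svc is not active"
done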

Verify the Kubelet is aware of Edera

The Edera installer passes the container runtime endpoint to the kubelet as a CLI argument, which takes precedence over the on-disk config. To verify the kubelet is pointing at the Edera runtime:

ps aux | grep kubelet
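
To pull out just the runtime endpoint from the kubelet's arguments, something like this works:

ps aux | grep kubelet | grep -o -- '--container-runtime-endpoint=[^ ]*'

You should see it pointing at the Edera CRI socket, e.g. --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket.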

Pod terminated with OOMKilled

Starting in v1.6.0, Edera properly reports OOMKilled status to Kubernetes when a container exceeds its memory limit.

To check if a pod was OOMKilled:

kubectl describe pod <pod-name> | grep -A 5 "Last State"

Expected output for an OOMKilled container:

Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137

Common causes:

  • Memory limit set too low for the workload
  • Memory leak in the application
  • Burst memory usage during startup

Solutions:

  • Increase the memory limit in your pod spec (see the sketch after this list)
  • Use protect zone metrics <zone> to monitor memory usage
  • Check application logs before the OOM event
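
As a minimal sketch of the first solution, here is an illustrative pod spec with a raised memory limit (the pod name, image, and values are placeholders, not defaults):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  runtimeClassName: edera
  containers:
  - name: app
    image: example/app:latest
    resources:
      requests:
        memory: "256Mi"
      limits:
        memory: "512Mi"   # raise this if the container is OOMKilled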

Kubelet is failing

If Edera Protect is installed but systemctl status kubelet shows the kubelet failing, the kubelet configuration file may be incorrect.

For Amazon Linux it should look like:

cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=" --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket"

For LKE / Akamai it should look like:

cat /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cloud-provider=external --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket"