Manually disable Edera Kubernetes integration
This guide explains how to manually disable the Edera Kubernetes integration on a node.
Please follow the general troubleshooting steps first, or contact Edera support.
When to use this
Use this procedure only if:
- The protect-cri service is failing and can no longer proxy requests to the system containerd, disrupting normal Kubernetes operation.
- Your node is stuck or degraded due to the Edera runtime integration.
- An Edera CRI failure is impacting Kubernetes operations and no formal recovery plan is available.
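Before using this break-glass procedure, it can help to confirm that the Edera CRI proxy is actually the failing component. Assuming the systemd unit is named protect-cri as referenced above (the exact unit name may differ on your install), a quick check could look like:
sudo systemctl status protect-cri
sudo journalctl -u protect-cri --since "15 minutes ago"
Look for repeated crashes or errors proxying to containerd before disconnecting the kubelet.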
This procedure leaves Edera installed on the node but disconnects it from the kubelet. Workloads that don’t use the Edera runtimeClass will continue to run.
If you only want to bypass Edera
To run workloads outside Edera, you can remove the runtimeClassName field from your pod specifications. You don’t need to follow the rest of this guide for that.
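As an illustration, a pod that previously targeted Edera simply omits the field. This sketch assumes your runtime class is named edera; the pod name and image are placeholders:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-app               # placeholder name
spec:
  # runtimeClassName: edera    <- removed so the pod runs on the default runtime
  containers:
    - name: app
      image: nginx:1.27        # placeholder image
EOF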
AMI-based nodes (recommended recovery)
If your cluster uses AWS-managed node groups or self-managed node groups with AMIs:
- Create a new node group using stock AWS EKS AMIs.
- Scale down or remove the node group that uses Edera AMIs.
This method is the safest and most reliable way to remove Edera from your cluster and is preferred over manual steps when available.
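If you manage node groups with eksctl, the swap might look roughly like the following. The cluster and node group names are placeholders, and the exact flags depend on how your groups were created:
# Create a replacement node group using the stock EKS-optimized AMI
eksctl create nodegroup --cluster my-cluster --name standard-nodes --nodes 3
# Drain the Edera nodes, then scale that node group to zero
kubectl drain <edera-node-name> --ignore-daemonsets --delete-emptydir-data
eksctl scale nodegroup --cluster my-cluster --name edera-nodes --nodes 0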
Prerequisites
- You have local administrative access to the node.
- You can restart the kubelet.
Step 1: Edit the kubelet configuration
Use the following command to open kubelet configuration files. These may be in /etc/default, /etc/sysconfig, or another location depending on your system.
sudo find /etc -name kubelet -exec $EDITOR {} +
In each file, look for a line like the following:
KUBELET_EXTRA_ARGS=" --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket"
Remove the line, or remove only the --container-runtime-endpoint argument. Save and close the file.
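If you prefer to script the edit, a one-off sed can strip just the Edera endpoint argument. This is a sketch that assumes the argument appears exactly as shown above and that the file is /etc/sysconfig/kubelet (it may live under /etc/default instead), so inspect the file afterwards:
sudo sed -i 's| --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket||g' /etc/sysconfig/kubelet
grep container-runtime-endpoint /etc/sysconfig/kubelet || echo "endpoint argument removed"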
Step 2: Revert systemd overrides
Revert any kubelet service overrides to restore vendor defaults:
sudo systemctl revert kubelet
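If you want to see which overrides were in effect, or confirm they are gone after reverting, you can inspect the unit:
systemctl cat kubelet                            # shows the unit plus any drop-in overrides
ls /etc/systemd/system/kubelet.service.d/ 2>/dev/null
systemctl revert removes runtime overrides and restores the vendor-supplied unit files.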
Step 3: Reload and restart the kubelet
Reload systemd and restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
This change should not require a full node reboot. Workloads that were not using the edera runtime class will remain unaffected.
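To confirm the kubelet came back and is no longer pointed at the Edera CRI socket, you can check the service and the running process arguments. The second command should print nothing (or the system containerd socket) once the Edera endpoint has been removed:
systemctl is-active kubelet
ps -o args= -C kubelet | tr ' ' '\n' | grep container-runtime-endpoint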
Step 4: Verify node status
Run the following command to verify that the node is healthy:
kubectl get nodes -o wide
The node should be in the Ready state and running workloads as expected.
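To check the workloads on the recovered node specifically, you can filter pods by node name; replace the placeholder with your node's name:
kubectl get pods -A --field-selector spec.nodeName=<node-name> -o wide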
Summary
This break-glass procedure disables the Edera container runtime integration without uninstalling Edera. It is intended for emergency use only and should not be part of your normal workflow or recovery plan.