v1.5.1

Release Notes

✨ New Features & Enhancements

Additional Kubernetes securityContext field support

securityContext.readOnlyRootFilesystem is now properly supported on container specs
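
For reference, a minimal container spec using this field might look like the following (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo                          # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      securityContext:
        readOnlyRootFilesystem: true           # root filesystem is mounted read-only
```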

Kubernetes-compliant container exit codes

Container-level exit codes are now properly mapped to CRI exit codes in a Kubernetes-compliant way.
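
As a rough illustration of the convention: a container terminated by SIGKILL is reported with exit code 137 (128 + the signal number). An abbreviated, illustrative container status as seen in the pod object:

```yaml
status:
  containerStatuses:
    - name: app
      state:
        terminated:
          exitCode: 137    # 128 + 9 (SIGKILL), per the usual Kubernetes convention
          reason: Error
```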

The Kubernetes field terminationGracePeriodSeconds is now properly respected. The Edera CRI waits that length of time for container workloads to self-terminate before forcibly terminating them, as other Kubernetes CRIs do.
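
A minimal sketch of a pod spec relying on this behavior (the name, image, and timeout value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo                 # illustrative name
spec:
  terminationGracePeriodSeconds: 60            # CRI waits up to 60s after SIGTERM before killing
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
```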

Container-level cgroupv2-based limits now work correctly in Kubernetes Pods

Previously, only pod-level limits were respected. Container-level limits such as resources.limits.cpu and resources.limits.memory are now properly respected in multi-container pods.
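
For example, the per-container limits in a multi-container pod like this illustrative spec are now enforced individually rather than only at the pod level (names and images are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo                                # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest       # placeholder image
      resources:
        limits:
          cpu: "500m"                              # enforced for this container only
          memory: "256Mi"
    - name: sidecar
      image: registry.example.com/sidecar:latest   # placeholder image
      resources:
        limits:
          cpu: "100m"
          memory: "64Mi"
```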

PSI support enabled by default in all Edera zone kernels

/proc/pressure/memory, /proc/pressure/cpu, /proc/pressure/io are now available in-zone when using the default Edera zone kernel.
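
These files follow the standard kernel PSI format; for example, /proc/pressure/memory typically looks like this (values are illustrative):

```
some avg10=0.00 avg60=0.00 avg300=0.00 total=11220
full avg10=0.00 avg60=0.00 avg300=0.00 total=8455
```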

🐛 Bug Fixes

Significant improvements to workload create/destroy reliability and performance

Replaced QEMU sigaltstack handling with libucontext, improving signal-handling performance by roughly 70x.

Fixed several bugs that prevented zones from being properly cleaned up in some cases.

Fixed several bugs that prevented workloads from being properly cleaned up if they failed to successfully start.

Fixed bug causing concurrent device ID allocations to sometimes fail or clash with existing allocations.

Fixed several bugs where incorrect memory reporting in Kubernetes contexts prevented zone creation.

Significant improvements to workload volume mount reliability and performance

Improved 9pfs handling to drastically increase mount reliability and reduce performance problems for PV and PVH zones.

Fixed several bugs that prevented Kubernetes-based pod mounts from working correctly in some cases.

Kubernetes Capability Raise/Drop

Fixed a bug that prevented securityContext.capabilities configurations from being properly applied in-zone to the workload cgroup.
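
An illustrative container securityContext exercising both raise and drop (the name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: capabilities-demo                      # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      securityContext:
        capabilities:
          add: ["NET_BIND_SERVICE"]            # capabilities raised for the workload
          drop: ["ALL"]                        # capabilities dropped for the workload
```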

Known issues

K8S: Pods that are OOMKilled by the CRI will report a status of Exited via kubectl, rather than an explicit status of OOMKilled

Upgrade notes

There are no known breaking changes in this release from the previous release v1.5.0.
