# Memory ballooning with Edera
This guide provides an overview of how Edera manages memory ballooning, including its default settings, dynamic and static memory allocation modes, and manual adjustment options with the `protect` CLI. Whether you're deploying Kubernetes workloads or running specialized environments, this guide will help you optimize memory usage effectively.
- Default memory settings: Learn about Edera's default `max` and `target` memory values and how to override them.
- Dynamic mode: Understand how memory is automatically adjusted based on usage.
- Static mode: Explore fixed memory allocation for advanced use cases.
- Manual adjustments: Use the `protect` CLI to fine-tune memory settings.
- Pod annotations: Simplify memory configuration directly in your Kubernetes manifests.
For a deeper dive into memory ballooning and advanced configurations, refer to our technical guide on memory ballooning.
## Edera default memory settings
- Default max memory: `1024M`
- Default target memory: `300M` (also the minimum allowed)

If you set `max`, `min`, or `target` below `300M`, it will be overridden to `300M`. You can override these values manually using the `protect` CLI.

## Memory resource policy
Edera supports two memory allocation modes: dynamic (default) and static.
### Dynamic mode (default)
In dynamic mode, memory is adjusted automatically between configured min and max values based on actual usage.
Pods can omit the annotation, or include:

```yaml
dev.edera/resource-policy: "dynamic"
```

#### Maximum memory
- Calculated as the sum of all containers' `resources.limits.memory`
- If unset, defaults to `1024M`
- If below `300M`, it is bumped to `300M`
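As a sketch of how these rules combine (values in MB; `derive_max_mb` is an illustrative helper, not part of Edera or the `protect` CLI):

```shell
#!/bin/sh
# Illustrative sketch of how the dynamic-mode maximum is derived.
# derive_max_mb is a hypothetical name; the argument is the sum of all
# containers' memory limits in MB, with 0 meaning "no limits set".
derive_max_mb() {
  max_mb=$1
  [ "$max_mb" -eq 0 ] && max_mb=1024    # unset -> default of 1024M
  [ "$max_mb" -lt 300 ] && max_mb=300   # below 300M -> bumped to 300M
  echo "$max_mb"
}

derive_max_mb 768    # two containers limited to 512M + 256M -> 768
derive_max_mb 0      # no limits set -> 1024
derive_max_mb 128    # below the floor -> 300
```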
#### Dynamic memory behavior
- Allocation is recalculated at intervals based on actual process usage.
- If ≥90% of memory is unused, the zone resizes to half its current size
- If ≥70% of memory is used, memory is increased to the smaller of:
  - The configured maximum value, or
  - Current usage + 80 MB
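The grow/shrink decision above can be sketched in a few lines of shell (values in MB; `next_size` is an illustrative helper, not part of the `protect` CLI, and it assumes growth is capped at the configured maximum):

```shell
#!/bin/sh
# Illustrative sketch of the dynamic resize rule. All values are in MB.
# Args: current zone size, current usage, configured maximum.
next_size() {
  total=$1; used=$2; max=$3
  unused=$((total - used))
  if [ $((unused * 100)) -ge $((total * 90)) ]; then
    # >=90% unused: shrink the zone to half its current size
    echo $((total / 2))
  elif [ $((used * 100)) -ge $((total * 70)) ]; then
    # >=70% used: grow toward usage + 80 MB, capped at the maximum
    grown=$((used + 80))
    [ "$grown" -gt "$max" ] && grown=$max
    echo "$grown"
  else
    # in between: leave the allocation alone
    echo "$total"
  fi
}

next_size 1024 50 1024   # mostly idle -> 512
next_size 400 350 1024   # under pressure -> 430
```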
#### Check intervals

- Checks start at `100ms` on pod start, settling to `2s`
- Adjustment is more aggressive on growth, less aggressive on shrink
- Intervals eventually stabilize at `2s` once usage plateaus
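The exact backoff schedule between `100ms` and `2s` isn't specified here; purely to illustrate the settling behavior, this sketch assumes simple doubling clamped at `2s`:

```shell
#!/bin/sh
# Assumed (not documented) backoff: double the check interval from 100ms
# until it reaches the 2s steady state.
interval_ms=100
schedule="${interval_ms}ms"
while [ "$interval_ms" -lt 2000 ]; do
  interval_ms=$((interval_ms * 2))
  [ "$interval_ms" -gt 2000 ] && interval_ms=2000   # clamp at 2s
  schedule="$schedule ${interval_ms}ms"
done
echo "$schedule"
```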
You can override the initial memory target using:

```yaml
dev.edera/initial-memory-request: "<value in MB>"
```

### Static mode (advanced use only)
> **Important:** Static mode disables dynamic ballooning, which can:
>
> - Lock in unused memory
> - Prevent scaling memory up or down based on real usage
> - Require restarts or manual intervention
Use static mode if:
- You’re running non-Kubernetes workloads
- You need predictable, fixed memory
- You’re tuning performance in specialized environments
#### Max / min / target memory
In static mode, memory is fixed at the configured value: min, max, and target are all set at launch and won't change regardless of usage. The only way to modify them is to restart the pod or use the `protect` CLI.
Use this annotation:

```yaml
dev.edera/resource-policy: "static"
```

All values (min, max, and target) are set to one of the following, in order of precedence:

- The sum of all containers' `resources.limits.memory`, or
- The `dev.edera/initial-memory-request` annotation, or
- The default of `1024M`

Any value below `300M` will be reset to the default target of `300M`.
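The fallback order above can be sketched as follows (values in MB; `resolve_static_memory` is an illustrative helper, not a `protect` CLI command, with `0` standing in for "unset"):

```shell
#!/bin/sh
# Illustrative sketch of static-mode value resolution.
# Args: sum of container memory limits in MB (0 if unset),
#       initial-memory-request annotation in MB (0 if unset).
resolve_static_memory() {
  limits_sum=$1; annotation=$2
  if [ "$limits_sum" -gt 0 ]; then
    value=$limits_sum             # prefer the summed container limits
  elif [ "$annotation" -gt 0 ]; then
    value=$annotation             # fall back to the annotation
  else
    value=1024                    # fall back to the default of 1024M
  fi
  [ "$value" -lt 300 ] && value=300   # below 300M resets to the 300M target
  echo "$value"
}

resolve_static_memory 0 0      # nothing set -> 1024
resolve_static_memory 200 0    # limits below the floor -> 300
resolve_static_memory 0 512    # annotation only -> 512
```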
## Manual memory adjustment with the protect CLI
```shell
# Switch between static and dynamic
protect zone update-resources <zone_name> --adjustment-policy static|dynamic

# Set target memory (values in MB)
protect zone update-resources <zone_name> --target-memory 500

# Set minimum memory
protect zone update-resources <zone_name> --min-memory 300

# Set maximum memory
protect zone update-resources <zone_name> --max-memory 700
```

## Putting it all together
### Apply the Edera RuntimeClass
Create `runtime.yaml`:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: edera
handler: edera
```

Apply it:

```shell
kubectl apply -f runtime.yaml
```

Verify it's created:

```shell
kubectl get runtimeclass
```

Expected output:

```
NAME    HANDLER   AGE
edera   edera     1d
```

### Deploy a pod with Edera
Create `memory-pod.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-test
spec:
  runtimeClassName: edera
  containers:
    - name: nginx
      image: nginx:1.25.3
```

Apply the pod:

```shell
kubectl apply -f memory-pod.yaml
```

Confirm the pod is online:

```shell
watch kubectl get pods --all-namespaces --field-selector metadata.name=memory-test
```

### Test with the protect CLI
Find the node running the pod:

```shell
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.metadata.name == "memory-test") | .spec.nodeName + " " + .metadata.name'
```

SSH into the node and confirm the zone:

```shell
protect zone list
```

You should see something like:
```
┌────────────────────────────┬──────────────────────────────────────┬───────┬───────────────┬──────────────────────────────┐
│ name                       ┆ uuid                                 ┆ state ┆ ipv4          ┆ ipv6                         │
╞════════════════════════════╪══════════════════════════════════════╪═══════╪═══════════════╪══════════════════════════════╡
│ k8s_memory-hog_memory-test ┆ 08019dbc-efb9-49ac-aa45-f1b46dcccb3a ┆ ready ┆ 10.0.2.249/32 ┆ fe80::d882:bcff:fe45:cc5a/32 │
└────────────────────────────┴──────────────────────────────────────┴───────┴───────────────┴──────────────────────────────┘
```

Watch memory usage in a second terminal:
```shell
protect zone top
```

Update the memory target to 4 GB:

```shell
protect zone update-resources --target-memory 4096 k8s_memory-hog_memory-test
```

You'll see something like:
```
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Edera Protect ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ name                 id                   status  total memory  used memory  free memory                                    ┃
┃ k8s_memory-hog_memor 08019dbc-efb9-49ac-a ready   3.9 GiB       159 MiB      3.8 GiB                                       ┃
```

### Alternative: Use pod annotations
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-test
  annotations:
    dev.edera/initial-memory-request: "4096"
    dev.edera/resource-policy: "dynamic"
spec:
  runtimeClassName: edera
  containers:
    - name: nginx
      image: nginx:1.25.3
```

## Additional notes
Tested with Edera 1.0.3-rc4