CPU performance benchmarking
This guide walks you through benchmarking CPU performance in Edera zones versus standard containers. You’ll deploy a benchmark pod, run multi-threaded CPU tests, and measure overhead—the same way you’d validate any production workload.
By the end, you’ll have concrete numbers showing how Edera performs under your workload’s CPU profile.
Prerequisites
Before you begin:
- Edera is installed and running on your cluster (installation guide)
- kubectl is configured and can access your cluster
- You have at least one node with the Edera runtime
- You have a baseline node running standard containers (no Edera)
Verify your setup:
```
# Confirm Edera RuntimeClass exists
kubectl get runtimeclass edera

# Confirm your cluster has nodes available
kubectl get nodes
```

Key annotations for CPU workloads
Edera zones use annotations to configure resources. For CPU workloads, one annotation matters:
```
metadata:
  annotations:
    dev.edera/cpu: "8"  # Match this to your CPU request
```

The dev.edera/cpu annotation is required for multi-threaded workloads. Without it, zones receive a default number of vCPUs regardless of your Kubernetes CPU request. Always set dev.edera/cpu to match your pod’s CPU request.

Step 1: Deploy the Edera benchmark pod
Create a pod that runs sysbench inside an Edera zone. Adjust the CPU count to match your target workload—this example uses 8 CPUs:
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bench-edera
  labels:
    app: cpu-bench
  annotations:
    dev.edera/cpu: "8"
spec:
  runtimeClassName: edera
  restartPolicy: Never
  containers:
  - name: bench
    image: alpine:3.21
    command: ["sh", "-c", "apk add --no-cache sysbench > /dev/null 2>&1 && sleep infinity"]
    resources:
      requests:
        cpu: "8"
        memory: "2Gi"
      limits:
        cpu: "8"
        memory: "2Gi"
EOF
```

Wait for the pod to be ready. Edera zones take slightly longer to start than standard containers—this is normal:
```
kubectl wait --for=condition=Ready pod/bench-edera --timeout=120s
```

Verify sysbench is installed:
```
kubectl exec bench-edera -- sysbench --version
```

Expected output:

```
sysbench 1.0.20
```

Step 2: Deploy the baseline pod
Deploy an identical pod on a standard (non-Edera) node for comparison:
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bench-baseline
  labels:
    app: cpu-bench
spec:
  restartPolicy: Never
  containers:
  - name: bench
    image: alpine:3.21
    command: ["sh", "-c", "apk add --no-cache sysbench > /dev/null 2>&1 && sleep infinity"]
    resources:
      requests:
        cpu: "8"
        memory: "2Gi"
      limits:
        cpu: "8"
        memory: "2Gi"
EOF
```

```
kubectl wait --for=condition=Ready pod/bench-baseline --timeout=120s
```

Step 3: Run the benchmark
Run sysbench on both pods. The test calculates prime numbers using multiple threads—a pure CPU workload with no disk or network I/O.
Match --threads to your CPU count to fully exercise all allocated cores:
```
# Edera zone
kubectl exec bench-edera -- \
  sysbench cpu --cpu-max-prime=20000 --threads=8 --time=60 run

# Baseline
kubectl exec bench-baseline -- \
  sysbench cpu --cpu-max-prime=20000 --threads=8 --time=60 run
```

Reading the results
Look for the events per second line in the output:
```
CPU speed:
    events per second: 8927.10
```

This is your primary metric. Higher is better. Compare the two numbers directly:

```
Overhead = (baseline - edera) / baseline × 100
```

Expected result: With dev.edera/cpu set correctly, CPU performance should be within a few percent of baseline.
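The overhead formula is easy to script once you have both numbers. A minimal sketch using awk; the two values below are hypothetical placeholders for your measured results:

```shell
# Hypothetical measured values: substitute your own numbers.
baseline=9100.50   # baseline events per second
edera=8927.10      # Edera events per second

# Overhead = (baseline - edera) / baseline * 100
overhead=$(awk -v b="$baseline" -v e="$edera" 'BEGIN { printf "%.2f", (b - e) / b * 100 }')
echo "CPU overhead: ${overhead}%"   # prints: CPU overhead: 1.91%
```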
Step 4: Run multiple iterations
For reliable results, run the benchmark multiple times and average:
```
for i in 1 2 3 4 5; do
  echo "=== Run $i ==="
  kubectl exec bench-edera -- \
    sysbench cpu --cpu-max-prime=20000 --threads=8 --time=60 run \
    | grep "events per second"
done
```

```
for i in 1 2 3 4 5; do
  echo "=== Run $i ==="
  kubectl exec bench-baseline -- \
    sysbench cpu --cpu-max-prime=20000 --threads=8 --time=60 run \
    | grep "events per second"
done
```

Five runs gives you enough data to see consistency and calculate a meaningful average.
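Averaging the collected lines can also be done with awk. A sketch; the five result lines below are hypothetical placeholders for the grep output you captured:

```shell
# Hypothetical "events per second" lines captured from the five runs.
results="events per second:  8927.10
events per second:  8915.42
events per second:  8940.07
events per second:  8921.88
events per second:  8933.51"

# Sum the last field of each line and divide by the number of runs.
avg=$(printf '%s\n' "$results" | awk '{ sum += $NF; n++ } END { printf "%.2f", sum / n }')
echo "Average events/sec: $avg"   # prints: Average events/sec: 8927.60
```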
Cleanup
```
kubectl delete pod bench-edera bench-baseline
```

Scaling the test to your workload
The example above uses 8 CPUs. Adjust for your environment:
| Your workload | CPU request | dev.edera/cpu | --threads |
|---|---|---|---|
| Small service | "2" | "2" | 2 |
| Medium app | "4" | "4" | 4 |
| CPU-heavy app | "8" | "8" | 8 |
| Large compute | "15" | "15" | 15 |
The key rule: dev.edera/cpu, your Kubernetes CPU request, and --threads should all match. Using more threads than available vCPUs will cap throughput and produce misleading overhead numbers.
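One way to enforce that rule is to generate the manifest from a single variable, so the annotation, the request, and the thread count can never drift apart. A sketch, not the official workflow; the CPUS variable and the generated manifest are illustrative, and you would pipe it to kubectl apply -f - when ready:

```shell
# Single source of truth for dev.edera/cpu, the CPU request, and --threads.
CPUS=4

manifest=$(cat <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bench-edera
  annotations:
    dev.edera/cpu: "$CPUS"
spec:
  runtimeClassName: edera
  containers:
  - name: bench
    image: alpine:3.21
    resources:
      requests:
        cpu: "$CPUS"
      limits:
        cpu: "$CPUS"
EOF
)

printf '%s\n' "$manifest"
echo "Benchmark with: sysbench cpu --threads=$CPUS"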
Troubleshooting
Pod stuck in ContainerCreating
Edera zones take longer to start than standard containers. Wait up to 120 seconds before investigating.
If the pod is still stuck after 2 minutes:
```
kubectl describe pod bench-edera
```

Check the Events section for errors.
Low CPU performance on Edera
If Edera performance is significantly lower than baseline, check that the dev.edera/cpu annotation is set:
```
kubectl get pod bench-edera -o jsonpath='{.metadata.annotations.dev\.edera/cpu}'
```

If this returns empty, the zone is running with default vCPUs. Delete the pod and redeploy with the annotation.
Pods pending due to insufficient resources
If your pod requests more CPU or memory than the node has available, it will stay in Pending. Check allocatable resources:
```
kubectl get nodes -o jsonpath='{.items[0].status.allocatable}'
```

Next steps
- Performance validation suite—Network throughput and additional CPU benchmarks
- Deploy your app—Run your application with Edera isolation
- Memory management—Tune memory allocation for zones