# Memory bandwidth benchmarking
This guide walks you through benchmarking memory bandwidth in Edera zones versus standard containers. You’ll run single-threaded and multi-threaded memory tests and measure overhead.
By the end, you’ll have concrete numbers showing memory bandwidth under Edera compared to baseline.
## Prerequisites
Before you begin:
- Edera is installed and running on your cluster (installation guide)
- `kubectl` is configured and can access your cluster
- You have at least one node with the Edera runtime
- You have a baseline node running standard containers (no Edera)
## Why thread count matters
Memory bandwidth benchmarks are sensitive to thread count. Tools like sysbench and STREAM use multiple threads to saturate memory buses, and results scale with the number of available CPUs.
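To avoid guessing, you can read the vCPU count from inside the environment where the benchmark will run. A minimal sketch using `nproc` (inside a pod you would run it via `kubectl exec`):

```shell
# Use the vCPU count visible to this environment as the thread count.
# Inside a pod: kubectl exec <pod> -- nproc
threads=$(nproc)
echo "Using --threads=$threads"
```

This keeps the thread count in step with the actual vCPU allocation instead of a hard-coded number.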
Always match `--threads` to the vCPU count on both runtimes. If your Edera zone has 8 vCPUs but the benchmark runs 16 threads, the extra threads compete for CPUs and artificially reduce throughput. This produces misleading overhead numbers that don't reflect actual memory performance.

## Key annotations
As with CPU benchmarking, set `dev.edera/cpu` to match your pod's CPU request:
```yaml
metadata:
  annotations:
    dev.edera/cpu: "8"  # Match this to your CPU request
```

## Step 1: Deploy benchmark pods
Deploy identical pods on both runtimes. This example uses 8 CPUs:
```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: membench-edera
  labels:
    app: mem-bench
  annotations:
    dev.edera/cpu: "8"
spec:
  runtimeClassName: edera
  restartPolicy: Never
  containers:
  - name: bench
    image: alpine:3.21
    command: ["sh", "-c", "apk add --no-cache sysbench > /dev/null 2>&1 && sleep infinity"]
    resources:
      requests:
        cpu: "8"
        memory: "2Gi"
      limits:
        cpu: "8"
        memory: "2Gi"
---
apiVersion: v1
kind: Pod
metadata:
  name: membench-baseline
  labels:
    app: mem-bench
spec:
  restartPolicy: Never
  containers:
  - name: bench
    image: alpine:3.21
    command: ["sh", "-c", "apk add --no-cache sysbench > /dev/null 2>&1 && sleep infinity"]
    resources:
      requests:
        cpu: "8"
        memory: "2Gi"
      limits:
        cpu: "8"
        memory: "2Gi"
EOF
```

Wait for both pods:
```shell
kubectl wait --for=condition=Ready pod/membench-edera --timeout=120s
kubectl wait --for=condition=Ready pod/membench-baseline --timeout=120s
```

## Step 2: Run single-threaded benchmark
Start with a single-threaded test. This isolates raw memory bandwidth from any threading effects:
```shell
# Edera zone
kubectl exec membench-edera -- \
  sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=1 run

# Baseline
kubectl exec membench-baseline -- \
  sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=1 run
```

### Reading the results
Look for the `transferred` line:

```
102400.00 MiB transferred (17458.31 MiB/sec)
```

The MiB/sec value is your primary metric. Higher is better.
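If you want to script the comparison, the MiB/sec figure can be pulled out of the `transferred` line with `awk`. A minimal sketch (the sample line below is illustrative; in practice it would come from `kubectl exec ... | grep transferred`):

```shell
# Sample sysbench "transferred" line.
line="102400.00 MiB transferred (17458.31 MiB/sec)"

# Split on parentheses, then take the first word inside them.
rate=$(printf '%s\n' "$line" | awk -F'[()]' '{print $2}' | awk '{print $1}')
echo "$rate"   # 17458.31
```

Capturing just the number makes it easy to feed the results into the overhead calculation later.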
Expected result: Single-threaded memory bandwidth should be within 1-2% of baseline.
## Step 3: Run multi-threaded benchmark
Run with threads matching your vCPU count. Use the same thread count on both runtimes:
```shell
# Edera zone — 8 threads for 8 vCPUs
kubectl exec membench-edera -- \
  sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=8 run

# Baseline — same 8 threads
kubectl exec membench-baseline -- \
  sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=8 run
```

Expected result: Multi-threaded memory bandwidth should be within 2-3% of baseline.
## Step 4: Run multiple iterations
For reliable results, run each configuration 5 times:
```shell
echo "=== Edera (single-threaded) ==="
for i in 1 2 3 4 5; do
  echo -n "Run $i: "
  kubectl exec membench-edera -- \
    sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=1 run \
    | grep "transferred"
done

echo "=== Baseline (single-threaded) ==="
for i in 1 2 3 4 5; do
  echo -n "Run $i: "
  kubectl exec membench-baseline -- \
    sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=1 run \
    | grep "transferred"
done

echo "=== Edera (multi-threaded) ==="
for i in 1 2 3 4 5; do
  echo -n "Run $i: "
  kubectl exec membench-edera -- \
    sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=8 run \
    | grep "transferred"
done

echo "=== Baseline (multi-threaded) ==="
for i in 1 2 3 4 5; do
  echo -n "Run $i: "
  kubectl exec membench-baseline -- \
    sysbench memory --memory-block-size=1M --memory-total-size=10G --threads=8 run \
    | grep "transferred"
done
```

Calculate overhead:
```
Overhead = (baseline - edera) / baseline × 100
```

## Cleanup
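The formula can be evaluated directly in the shell with `awk`. A minimal sketch (the two throughput values below are placeholders; substitute the averages from your own runs):

```shell
# Average MiB/sec from your runs (illustrative placeholder values).
baseline=17800.00
edera=17355.00

# Overhead = (baseline - edera) / baseline * 100
awk -v b="$baseline" -v e="$edera" \
  'BEGIN { printf "Overhead: %.2f%%\n", (b - e) / b * 100 }'
# Overhead: 2.50%
```

A positive percentage means the Edera zone was slower than baseline for that configuration; a value near zero (or slightly negative, within run-to-run noise) means the runtimes are effectively equivalent.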
```shell
kubectl delete pod membench-edera membench-baseline
```

## Scaling the test
Adjust thread count for your environment. The critical rule: threads must equal vCPU count on both runtimes.
| Your workload | CPU request | `dev.edera/cpu` | `--threads` |
|---|---|---|---|
| Small service | "2" | "2" | 2 |
| Medium app | "4" | "4" | 4 |
| CPU-heavy app | "8" | "8" | 8 |
| Large compute | "15" | "15" | 15 |
## Troubleshooting
### Unexpectedly high overhead
If Edera shows significantly higher overhead than expected:
1. **Check thread count.** Running more threads than vCPUs is the most common cause of inflated overhead numbers. Verify `--threads` matches `dev.edera/cpu`.
2. **Check the annotation.** Verify `dev.edera/cpu` is set:

   ```shell
   kubectl get pod membench-edera -o jsonpath='{.metadata.annotations.dev\.edera/cpu}'
   ```

3. **Check for outliers.** Occasional runs may show ~50% of expected throughput due to scheduling contention. Run 5+ iterations and discard statistical outliers.
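A simple, robust way to discard outliers is to report the median of your runs rather than the mean. A minimal sketch with five illustrative MiB/sec values, one of them a low outlier:

```shell
# Five illustrative MiB/sec results; the third run hit contention.
results="17458.31
17392.10
8731.02
17501.77
17420.55"

# Sort numerically and take the middle value (the median of 5 runs).
median=$(printf '%s\n' "$results" | sort -n | awk 'NR == 3')
echo "Median: $median MiB/sec"
# Median: 17420.55 MiB/sec
```

Unlike the mean, the median is unaffected by a single contended run, so one bad iteration won't distort your overhead calculation.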
### Pod stuck in ContainerCreating
Edera zones take longer to start than standard containers. Wait up to 120 seconds before investigating.
## Next steps
- CPU benchmarking—Validate CPU performance with sysbench
- Performance validation suite—Network throughput and additional benchmarks
- Deploy your app—Run your application with Edera isolation